Container-Native Storage for OpenShift Container Platform
Deploying Container-Native Storage for OpenShift Container Platform 3.6
Edition 1
Abstract
Chapter 1. Introduction to Containerized Red Hat Gluster Storage
The following table lists the Containerized Red Hat Gluster Storage solutions, a brief description, and the links to the documentation for more information about the solution.
Solution | Description | Documentation |
---|---|---|
Container-Native Storage (CNS) | This solution addresses the use-case where applications require both shared file storage and the flexibility of a converged infrastructure with compute and storage instances being scheduled and run from the same set of hardware. | For information on deploying CNS, see Chapter 2, Container-Native Storage for OpenShift Container Platform in this guide. |
Container Ready Storage (CRS) with Heketi | This solution addresses the use-case where a dedicated Gluster cluster is available external to the OpenShift Origin cluster, and you provision storage from the Gluster cluster. In this mode, Heketi also runs outside the cluster and can be co-located with a Red Hat Gluster Storage node. | For information on configuring CRS with Heketi, see Complete Example of Dynamic Provisioning Using Dedicated GlusterFS. |
Container Ready Storage (CRS) without Heketi | This solution uses your OpenShift Container Platform cluster (without Heketi) to provision Red Hat Gluster Storage volumes (from a dedicated Red Hat Gluster Storage cluster) as persistent storage for containerized applications. | For information on creating OpenShift Container Platform cluster with persistent storage using Red Hat Gluster Storage, see Persistent Storage Using GlusterFS . |
Chapter 2. Container-Native Storage for OpenShift Container Platform
- OpenShift provides the platform as a service (PaaS) infrastructure based on Kubernetes container management. Basic OpenShift architecture is built around multiple master systems where each system contains a set of nodes.
- Red Hat Gluster Storage provides the containerized distributed storage based on Red Hat Gluster Storage 3.3 container. Each Red Hat Gluster Storage volume is composed of a collection of bricks, where each brick is the combination of a node and an export directory.
- Heketi provides the Red Hat Gluster Storage volume life cycle management. It creates the Red Hat Gluster Storage volumes dynamically and supports multiple Red Hat Gluster Storage clusters.
- Create multiple persistent volumes (PV) and register these volumes with OpenShift.
- Developers then submit a persistent volume claim (PVC).
- A PV is identified and selected from a pool of available PVs and bound to the PVC.
- The OpenShift pod then uses the PV for persistent storage.
Figure 2.1. Architecture - Container-Native Storage for OpenShift Container Platform
Chapter 3. Container-Ready Storage for OpenShift Container Platform
- OpenShift Container Platform administrators might not want to manage storage. Container-Ready Storage separates storage management from container management.
- Leverage legacy storage (SAN, Arrays, Old filers): Customers often have storage arrays from traditional storage vendors that have either limited or no support for OpenShift. Container-Ready Storage mode allows users to leverage existing legacy storage for OpenShift Containers.
- Cost effective: In environments where the cost of new infrastructure is a challenge, existing storage arrays can be re-purposed to back OpenShift under Container-Ready Storage. Container-Ready Storage is well suited to such situations: you can run Red Hat Gluster Storage inside a VM and serve out LUNs or disks from these storage arrays to OpenShift, offering all of the features that the OpenShift storage subsystem has to offer, including dynamic provisioning. This is a very useful solution in environments with potential infrastructure additions.
Chapter 4. Install and Upgrade Workflow: What Tasks Do I Need To Complete?
4.1. (Install) Existing Environment: OpenShift Container Platform and Container-Native Storage are not installed
4.1.1. Customer Objective
4.1.2. Prerequisites
- Install the registry with NFS backend when installing OpenShift Container Platform.
- Do not install Logging and Metrics when installing OpenShift Container Platform.
4.1.3. Required Installation Tasks
- Migrate registry back-end to Gluster: Migrating Registry
- To use Block Storage: Block Storage
- To set Gluster-block as backend for Logging and Metrics: Logging and Metrics
- To use File Storage: File Storage
4.2. (Install) Existing Environment: OpenShift Container Platform and Container-Ready Storage are not installed
4.2.1. Customer Objective
4.2.2. Prerequisites
- Install the registry with NFS backend when installing OpenShift Container Platform.
- Do not install Logging and Metrics when installing OpenShift Container Platform.
4.2.3. Required Installation Tasks
- Migrate registry backend to Gluster: Migrating Registry
- To use Block Storage: Block Storage
- To set Gluster-block as backend for logging and metrics: Logging and Metrics
- To use File Storage: File Storage
4.3. (Install) Existing Environment: OpenShift Container Platform 3.6 is installed and Container-Native Storage 3.6 is not installed
4.3.1. Customer Objective
4.3.2. Prerequisites
4.3.3. Required Installation Tasks
- If the registry was not set up during OpenShift Container Platform 3.6 installation, follow the advanced installation of OpenShift Container Platform 3.6 to set up the registry with NFS as the backend. The Ansible variable to be set is openshift_hosted_registry_storage_kind=nfs: Advanced Installation. Refer to section 2.6.3.9: Configuring the OpenShift Container Registry. (An illustrative inventory snippet appears after this list.)
- Migrate registry backend to Gluster: Migrating Registry
- To use Block Storage: Block Storage
- To set Gluster-block as backend for logging and metrics: Logging and Metrics
- To use File Storage: File Storage
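For reference, a minimal sketch of how this variable might appear in the advanced-installation inventory file is shown below. Only openshift_hosted_registry_storage_kind=nfs is taken from this guide; the surrounding registry storage variables are illustrative assumptions and should be verified against the Advanced Installation documentation for your environment.
[OSEv3:vars]
# Back the integrated registry with NFS during advanced installation
openshift_hosted_registry_storage_kind=nfs
# Illustrative values only; confirm the exact variables in the Advanced Installation guide
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi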
4.4. (Install) Existing Environment: OpenShift Container Platform 3.6 is installed and Container-Ready Storage is not installed
4.4.1. Customer Objective
4.4.2. Prerequisites
4.4.3. Required Installation Tasks
- If the registry was not set up during OpenShift Container Platform installation, follow the advanced installation of OpenShift Container Platform to set up the registry with NFS as the backend. The Ansible variable to be set is openshift_hosted_registry_storage_kind=nfs: Advanced Installation
- Migrating the registry backend to Gluster: Migrating Registry
- To use block storage: Block Storage
- To set Gluster Block as back-end for Logging and Metrics: Logging and Metrics
- To use File Storage: File Storage
4.5. (Upgrade) Existing Environment: OpenShift Container Platform 3.6 is installed and Container-Native Storage is installed
4.5.1. Customer Objective
- OpenShift Container Platform 3.6 is installed and Container-Native Storage 3.5 is installed with Advanced Installer and Registry
- OpenShift Container Platform 3.6 is installed and Container-Native Storage 3.6 is installed with Advanced Installer and Registry
- OpenShift Container Platform 3.6 is installed and Container-Native Storage 3.5 is installed using cns-deploy tool.
4.5.2. Required Upgrade Tasks
- If the registry was not set up during OpenShift Container Platform installation, follow the advanced installation of OpenShift Container Platform to set up the registry with NFS as the backend. The Ansible variable to be set is openshift_hosted_registry_storage_kind=nfs: Advanced Installation
- Migrating the registry backend to Gluster: Migrating Registry
- To use block storage: Block Storage
- To set Gluster Block as back-end for Logging and Metrics: Logging and Metrics
- To use File Storage: File Storage
4.6.1. Customer Objective
4.6.2. Required Upgrade Tasks
- If the registry was not set up during OpenShift Container Platform installation, follow the advanced installation of OpenShift Container Platform to set up the registry with NFS as the backend. The Ansible variable to be set is openshift_hosted_registry_storage_kind=nfs: Advanced Installation
Note
Execute only the steps that are relevant to your environment.
- Migrating the registry backend to Gluster: Migrating Registry
- To use block storage: Block Storage
- To set Gluster Block as back-end for Logging and Metrics: Logging and Metrics
- To use File Storage: File Storage
Chapter 5. Support Requirements
5.1. Supported Versions
OpenShift Container Platform | Red Hat Gluster Storage | Container-Native Storage |
---|---|---|
3.6 | 3.3 | 3.6 |
3.5 | 3.2 | 3.5 |
5.2. Environment Requirements
5.2.1.1. Setting up the OpenShift Master as the Client
The OpenShift master can be used as the client to run the oc commands across the cluster when installing OpenShift. Generally, this is set up as a non-scheduled node in the cluster. This is the default configuration when using the OpenShift installer. You can also choose to install the client on your local machine to access the cluster remotely. For more information, see https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/cli-reference/#installing-the-cli.
Execute the following commands to install the heketi-client and cns-deploy packages.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install cns-deploy heketi-client
# subscription-manager repos --disable=rh-gluster-3-for-rhel-7-server-rpms
If you are using OpenShift Container Platform 3.6, subscribe to the 3.6 repository to enable installation of the OpenShift client packages:
# subscription-manager repos --enable=rhel-7-server-ose-3.6-rpms --enable=rhel-7-server-rpms
# yum install atomic-openshift-clients
# yum install atomic-openshift
5.2.3. Red Hat OpenShift Container Platform Requirements
- Configuring Multipathing on all Initiators
To ensure the iSCSI initiator can communicate with the iSCSI targets and achieve HA using multipathing, execute the following steps on all the OpenShift nodes (iSCSI initiator) where the client pods are hosted:
- To install initiator related packages on all the nodes where the initiator has to be configured, execute the following command:
# yum install iscsi-initiator-utils device-mapper-multipath
- To enable multipath, execute the following command:
# mpathconf --enable
- Add the following content to the devices section in the /etc/multipath.conf file (an illustrative snippet appears after this list).
- Execute the following command to restart services:
# systemctl restart multipathd
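The contents of the devices section are not reproduced in this extract. A typical stanza used with gluster-block iSCSI targets (LIO) is shown below as an illustrative sketch; verify the exact values against the product documentation for your release.
device {
        vendor "LIO-ORG"
        user_friendly_names "yes"
        path_grouping_policy "failover"
        path_selector "round-robin 0"
        failback immediate
        path_checker "tur"
        prio "const"
        no_path_retry 120
        rr_weight "uniform"
}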
- The OpenShift cluster must be up and running. For information on setting up OpenShift cluster, see https://access.redhat.com/documentation/en/openshift-container-platform/3.6/paged/installation-and-configuration.
- A cluster-admin user must be created. For more information, see Appendix B, Cluster Administrator Setup
- All OpenShift nodes on Red Hat Enterprise Linux systems must have glusterfs-client RPMs (glusterfs, glusterfs-client-xlators, glusterfs-libs, glusterfs-fuse) installed.
- It is recommended to persist the logs for the Heketi container. For more information on persisting logs, refer to https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#install-config-aggregate-logging.
5.2.4. Red Hat Gluster Storage Requirements
- The nodes on which the Heketi packages are installed must have valid subscriptions to the Red Hat Gluster Storage Server repositories.
- Red Hat Gluster Storage installations must adhere to the requirements outlined in the Red Hat Gluster Storage Installation Guide.
- The versions of Red Hat OpenShift Container Platform and Red Hat Gluster Storage that are integrated must be compatible, according to the information in Section 5.1, “Supported Versions”.
- A fully-qualified domain name must be set for each Red Hat Gluster Storage server node. Ensure that the correct DNS records exist, and that the fully-qualified domain name is resolvable via both forward and reverse DNS lookup.
Important
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
5.2.5. Planning Guidelines
- Sizing guidelines on Container-Native Storage 3.6 or Container-Ready Storage 3.6:
- Persistent volumes backed by the file interface: For typical operations, size for 300-500 persistent volumes backed by files per three-node Container-Native Storage or Container-Ready Storage cluster. The maximum limit of supported persistent volumes backed by the file interface is 1000 persistent volumes per three-node cluster in a Container-Native Storage or Container-Ready Storage deployment. Considering that micro-services can dynamically scale as per demand, it is recommended that the initial sizing keep sufficient headroom for the scaling. If additional scaling is needed, add a new three-node Container-Native Storage or Container-Ready Storage cluster to support additional persistent volumes. Creation of more than 1,000 persistent volumes per trusted storage pool is not supported for file-based storage.
- Persistent volumes backed by block-based storage: Size for a maximum of 300 persistent volumes per three-node Container-Native Storage or Container-Ready Storage cluster. Be aware that Container-Native Storage 3.6 and Container-Ready Storage 3.6 supports only OpenShift Container Platform logging and metrics on block-backed persistent volumes.
- Persistent volumes backed by file and block: Size for 300-500 persistent volumes (backed by files) and 100-200 persistent volumes (backed by block). Do not exceed these maximum limits of file or block-backed persistent volumes or the combination of a maximum 1000 persistent volumes per three-node Container-Native Storage or Container-Ready Storage cluster.
- Three-way distributed-replicated volumes are the only supported volume type.
- Each physical or virtual node that hosts a Red Hat Gluster Storage Container-Native Storage or Container-Ready Storage peer requires the following:
- a minimum of 8 GB RAM and 30 MB per persistent volume.
- the same disk type.
- the heketidb utilises a 2 GB distributed replica volume.
- Deployment guidelines on Container-Native Storage 3.6 or Container-Ready Storage 3.6:
- In Container-Native Storage mode, you can install the Container-Native Storage nodes, Heketi, and all provisioner pods on OpenShift Container Platform Infrastructure nodes or OpenShift Container Platform Application nodes.
- In Container-Ready Storage mode, you can install Heketi and all provisioner pods on OpenShift Container Platform Infrastructure nodes or on OpenShift Container Platform Application nodes.
- Red Hat Gluster Storage Container Native with OpenShift Container Platform supports up to 14 snapshots per volume by default (snap-max-hard-limit =14 in Heketi Template).
Chapter 6. Setting up Container-Native Storage
6.1. Configuring Port Access
- On each of the OpenShift nodes that will host the Red Hat Gluster Storage container, add the following rules to /etc/sysconfig/iptables in order to open the required ports (an illustrative set of rules appears after this list).
Note
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
- The port range starting at 49664 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the example rules the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
For more information about Red Hat Gluster Storage Server ports, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/chap-getting_started.
- Execute the following command to reload the iptables:
# systemctl reload iptables
- Execute the following command on each node to verify if the iptables are updated:
# iptables -L
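The iptables rules themselves are not reproduced in this extract. An illustrative set of rules for /etc/sysconfig/iptables, assuming the OS_FIREWALL_ALLOW chain that OpenShift manages and covering the ports described in the note above (port 2222 for sshd inside the gluster pod is an additional assumption), would be:
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24010 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT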
6.2. Enabling Kernel Modules
Before running the cns-deploy tool, you must ensure that the dm_thin_pool, dm_multipath, and target_core_user modules are loaded on the OpenShift Container Platform nodes. Execute the following commands on all OpenShift Container Platform nodes to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep dm_multipath
# lsmod | grep target_core_user
If the modules are not loaded, execute the following commands to load them:
# modprobe dm_thin_pool
# modprobe dm_multipath
# modprobe target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
6.3. Starting and Enabling Services
Execute the following commands to enable and start the rpcbind service:
# systemctl add-wants multi-user rpcbind.service
# systemctl enable rpcbind.service
# systemctl start rpcbind.service
Chapter 7. Setting up Container-Ready Storage
7.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Important
It is recommended to create a separate /var partition that is large enough (50GB - 100GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux 7 Server
Container-Ready Storage is supported only on Red Hat Enterprise Linux 7.
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network user name and password to register the system with the Red Hat Network:
# subscription-manager register
Identify Available Entitlement Pools
Run the following command to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels
For Red Hat Gluster Storage 3.3 on Red Hat Enterprise Linux 7.x:
- Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
Verify if the Channels are Enabled
Run the following command to verify if the channels are enabled:
# yum repolist
Update all packages
Ensure that all packages are up to date by running the following command:
# yum update
Important
If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Kernel Version Requirement
Container-Ready Storage requires the kernel-3.10.0-690.el7 version or higher to be used on the system. Verify the installed and running kernel versions by running the following commands:
# rpm -q kernel
kernel-3.10.0-693.el7.x86_64
# uname -r
3.10.0-693.el7.x86_64
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
- To enable gluster-block execute the following command:
# yum install gluster-block
Reboot
Reboot the system.
7.2. Configuring Port Access
On each of the Red Hat Gluster Storage nodes, open the required ports by executing the following commands:
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
Note
- Port 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
- The port range starting at 49664 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
7.3. Enabling Kernel Modules
- You must ensure that the dm_thin_pool and target_core_user modules are loaded in the Red Hat Gluster Storage nodes.
# modprobe target_core_user
# modprobe dm_thin_pool
Execute the following commands to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
- You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes.
# modprobe dm_multipath
Execute the following command to verify if the module is loaded:
# lsmod | grep dm_multipath
Note
To ensure these operations are persisted across reboots, create the following file and update it with the content as mentioned:
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
7.4. Starting and Enabling Services
Execute the following commands to start and enable the sshd, glusterd, and gluster-blockd services:
# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
Chapter 8. Setting up the Environment
8.1. Preparing the Red Hat OpenShift Container Platform Cluster
- On the master or client, execute the following command to login as the cluster admin user:
# oc login
- On the master or client, execute the following command to create a project, which will contain all the containerized Red Hat Gluster Storage services:
# oc new-project <project_name>
For example:
# oc new-project storage-project
Now using project "storage-project" on server "https://master.example.com:8443"
- After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can only run in privileged mode:
# oadm policy add-scc-to-user privileged -z default
- Execute the following steps on the master to set up the router:
Note
If a router already exists, proceed to Step 5. To verify if the router is already deployed, execute the following command:
# oc get dc --all-namespaces
- Execute the following command to enable the deployment of the router:
# oadm policy add-scc-to-user privileged -z router
- Execute the following command to deploy the router:
# oadm router storage-project-router --replicas=1
- Edit the subdomain name in the config.yaml file located at /etc/origin/master/master-config.yaml. For example:
subdomain: "cloudapps.mystorage.com"
- Restart the master OpenShift services by executing the following command:
# systemctl restart atomic-openshift-master
For OpenShift Container Platform 3.7, execute the following command to restart the services:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
Note
If the router setup fails, use the port forward method as described in Appendix C, Client Configuration using Port Forwarding.
For more information regarding router setup, see https://access.redhat.com/documentation/en/openshift-container-platform/3.6/paged/installation-and-configuration/chapter-4-setting-up-a-router
- Execute the following command to verify if the router is running:
# oc get dc <router_name>
For example:
# oc get dc storage-project-router
NAME                     REVISION   DESIRED   CURRENT   TRIGGERED BY
storage-project-router   1          1         1         config
Note
Ensure you do not edit the /etc/dnsmasq.conf file until the router has started.
- After the router is running, the client has to be set up to access the services in the OpenShift cluster. Execute the following steps on the client to set up the DNS.
- Execute the following command to find the IP address of the router:
# oc get pods -o wide --all-namespaces | grep router
storage-project storage-project-router-1-cm874 1/1 Running 119d 10.70.43.132 dhcp43-132.lab.eng.blr.redhat.com
- Edit the /etc/dnsmasq.conf file and add the following line to the file:
address=/.cloudapps.mystorage.com/<Router_IP_Address>
where, Router_IP_Address is the IP address of the node where the router is running.
- Restart the dnsmasq service by executing the following command:
# systemctl restart dnsmasq
- Edit /etc/resolv.conf and add the following line:
nameserver 127.0.0.1
For more information regarding setting up the DNS, see https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#envirornment-requirements.
8.2. Deploying Containerized Red Hat Gluster Storage Solutions
The following sections describe how to deploy Container-Native Storage and Container-Ready Storage using the cns-deploy tool.
Note
- It is recommended to use a separate cluster for the OpenShift Container Platform infrastructure workload (registry, logging, and metrics) and for application pod storage. Hence, if you have more than 6 nodes, ensure you create multiple clusters with a minimum of 3 nodes each. The infrastructure cluster should belong to the default project namespace.
- If you want to enable encryption on the Container-Native Storage setup, refer to Chapter 17, Enabling Encryption before proceeding with the following steps.
- You must first provide a topology file for heketi which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
where,
- clusters: Array of clusters. Each element on the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element on the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing optimum position of bricks by having replicas of bricks in different zones. Hence zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses
- manage: It is the hostname/IP Address that is used by Heketi to communicate with the node
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added
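The sample topology file itself is not included in this extract. A minimal illustrative topology.json for a single three-node cluster, using hypothetical hostnames, IP addresses, and device names, might look like the following:
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node1.example.com"],
                            "storage": ["192.168.121.101"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node2.example.com"],
                            "storage": ["192.168.121.102"]
                        },
                        "zone": 2
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node3.example.com"],
                            "storage": ["192.168.121.103"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                }
            ]
        }
    ]
}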
Note
Copy the topology file from the default location to your location and then edit it:
# cp /usr/share/heketi/topology-sample.json /<Path>/topology.json
Edit the topology file based on the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and the node.hostnames.storage section with the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
Important
Heketi stores its database on a Red Hat Gluster Storage volume. In cases where the volume is down, the Heketi service does not respond due to the unavailability of the volume served by a disabled trusted storage pool. To resolve this issue, restart the trusted storage pool which contains the Heketi volume.
8.2.1. Deploying Container-Native Storage
- Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -n <namespace> -g --admin-key <Key> topology.json
Note
- From Container-Native Storage 3.6, support for S3 compatible Object Store in Container-Native Storage is under technology preview. To deploy S3 compatible object store in Container-Native Storage see Step 1a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. Default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see Section 9.2, “Block Storage”). This default configuration will dynamically create block-hosting volumes of 500GB in size when more space is required. If you want to change this value then use --block-host in cns-deploy. For example:
# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
- To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --yes --admin-key <key> --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped. object-sc and object-capacity are optional parameters. Where, object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify if Heketi is loaded with the topology execute the following command:
# heketi-cli topology info
8.2.2. Deploying Container-Ready Storage
- To set up password-less SSH to all Red Hat Gluster Storage nodes, execute the following command on the client for each of the Red Hat Gluster Storage nodes:
# ssh-copy-id -i /root/.ssh/id_rsa root@<ip/hostname_rhgs node>
- Execute the following command on the client to deploy the heketi pod and to create a cluster of Red Hat Gluster Storage nodes:
# cns-deploy -n <namespace> --admin-key <Key> -s /root/.ssh/id_rsa topology.json
Note
- Support for S3 compatible Object Store is under technology preview. To deploy S3 compatible object store see Step 2a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. Default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see Section 9.2, “Block Storage”). This default configuration will dynamically create block-hosting volumes of 500GB in size when more space is required. If you want to change this value then use --block-host in cns-deploy. For example:
# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
- To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --admin-key <Key> --yes --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped. object-sc and object-capacity are optional parameters. Where, object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption, and allows us to run more bricks than before with the same memory consumption. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick-multiplexing:
- Execute the following command to enable brick multiplexing:
# gluster vol set all cluster.brick-multiplex on
For example:
# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- Restart the heketidb volumes:
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success
# gluster vol start heketidbstorage
volume start: heketidbstorage: success
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify if Heketi is loaded with the topology execute the following command:
# heketi-cli topology info
Chapter 9. Creating Persistent Volumes
Labels are an OpenShift Container Platform feature that support user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV.
9.1. File Storage
9.1.1. Static Provisioning of Volumes
To enable persistent volume support in OpenShift and Kubernetes, a few endpoints and a service must be created. The sample glusterfs endpoint file and the sample glusterfs service file are available in the /usr/share/heketi/templates/ directory.
Note
Copy the sample endpoint and service files to a location of your choice and then edit them. For example:
# cp /usr/share/heketi/templates/sample-gluster-endpoints.yaml /<path>/gluster-endpoints.yaml
- To specify the endpoints you want to create, update the copied sample-gluster-endpoints.yaml file with the endpoints to be created based on the environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint with the IP of the nodes in the trusted storage pool (an illustrative endpoints definition is shown after the verification step below).
name: is the name of the endpoint
ip: is the ip address of the Red Hat Gluster Storage nodes
- Execute the following command to create the endpoints:
# oc create -f <name_of_endpoint_file>
For example:
# oc create -f sample-gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
- To verify that the endpoints are created, execute the following command:
# oc get endpoints
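The endpoints file referenced above is not reproduced in this extract. A minimal sketch, assuming the glusterfs-cluster name used in the examples and placeholder node IP addresses, is:
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.10.100
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.101
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.102
    ports:
      - port: 1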
- Execute the following command to create a gluster service (an illustrative service definition is shown after the verification step below):
# oc create -f <name_of_service_file>
For example:
# oc create -f sample-gluster-service.yaml
service "glusterfs-cluster" created
- To verify that the service is created, execute the following command:
# oc get service
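The service file referenced above is likewise not reproduced here. A minimal sketch that matches the glusterfs-cluster endpoints is:
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1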
Note
The endpoints and the services must be created for each project that requires a persistent storage.
- Create a 100G persistent volume with Replica 3 from GlusterFS and output a persistent volume specification describing this volume to the file pv001.json:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
Important
You must manually add the Labels information to the .json file.
Following is the example YAML file for reference (an illustrative persistent volume definition also appears after the notes below):
name: The name of the volume.
storage: The amount of storage allocated to this volume
glusterfs: The volume type being used, in this case the glusterfs plug-in
endpoints: The endpoints name that defines the trusted storage pool created
path: The Red Hat Gluster Storage volume that will be accessed from the Trusted Storage Pool.
accessModes: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
labels: Use labels to identify common attributes or characteristics shared among volumes. In this case, we have defined the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
Note
- heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint=”TYPE ENDPOINT HERE”). This can then be piped to oc create -f - to create the persistent volume immediately.
- If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check on which trusted storage pool the volume is created using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
- When creating a Heketi volume with only two nodes with the replica count set to the default value of three (replica 3), an error "No space" is displayed by Heketi as there is no space to create a replica set of three disks on three different nodes.
- If all the heketi-cli write operations (for example: volume create, cluster create, and so on) fail and the read operations (for example: topology info, volume info, and so on) are successful, it is possible that the gluster volume is operating in read-only mode.
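The persistent volume definition written to pv001.json is not included in this extract. An illustrative definition, using the volume name, label, endpoints, and access mode described above (the gluster volume path is a placeholder generated by heketi), is:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-4fc22ff9
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: vol_glusterfs_4fc22ff9        # placeholder: the Red Hat Gluster Storage volume name created by heketi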
- Edit the pv001.json file and enter the name of the endpoint in the endpoints section.
- Create a persistent volume by executing the following command:
# oc create -f pv001.json
For example:
# oc create -f pv001.json
persistentvolume "glusterfs-4fc22ff9" created
- To verify that the persistent volume is created, execute the following command:
# oc get pv
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Available                    4s
- Create a persistent volume claim file (an illustrative claim file is shown after these steps).
- Bind the persistent volume to the persistent volume claim by executing the following command:
# oc create -f pvc.yaml
For example:
# oc create -f pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
- To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
# oc get pv
# oc get pvc
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS   CLAIM                             REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Bound    storage-project/glusterfs-claim            1m
# oc get pvc
NAME              STATUS   VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound    glusterfs-4fc22ff9   100Gi      RWX           11s
- The claim can now be used in the application. For example (an illustrative app.yaml is shown after these steps):
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application see, https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
# oc get pods
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
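The persistent volume claim file (pvc.yaml) and the application file (app.yaml) used in the steps above are not included in this extract. Minimal sketches, assuming the glusterfs-claim name and the storage-tier=gold label shown in the examples, are:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      storage-tier: gold

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: mypvc
          mountPath: /usr/share/busybox
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: glusterfs-claim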
9.1.2. Dynamic Provisioning of Volumes
9.1.2.1. Configuring Dynamic Provisioning of Volumes
9.1.2.1.1. Registering a Storage Class
- To create a storage class, execute the following command (an illustrative storage class definition appears at the end of this section):
where,
resturl: Gluster REST service/Heketi service url which provisions gluster volumes on demand. The general format must be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the openshift/kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com where the fqdn is a resolvable heketi service url.
restuser: Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool
volumetype: It specifies the volume type that is being used.
Note
Distributed-Three-way replication is the only supported volume type.
clusterid: It is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma separated cluster IDs. This is an optional parameter.
Note
To get the cluster ID, execute the following command:
# heketi-cli cluster list
secretNamespace + secretName: Identification of the Secret instance that contains the user password that is used when communicating with the Gluster REST service. These parameters are optional. An empty password will be used when both secretNamespace and secretName are omitted.
Note
When the persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service in the name gluster-dynamic-<claimname>. This dynamic endpoint and service will be deleted automatically when the persistent volume claim is deleted.
- To register the storage class to Openshift, execute the following command:
# oc create -f glusterfs-storageclass.yaml
storageclass "gluster-container" created
- To get the details of the storage class, execute the following command:
# oc describe storageclass <storageclass_name>
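The storage class definition referenced at the beginning of this section is not reproduced in this extract. A minimal sketch using the parameters described above (the resturl, cluster ID, and secret values are placeholders; older clusters may use the storage.k8s.io/v1beta1 API version) is:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f"   # optional, placeholder value
  secretNamespace: "default"
  secretName: "heketi-secret"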
9.1.2.1.2. Creating Secret for Heketi Authentication
Note
If the admin-key value (secret to access heketi to get the volume details) was not set during the deployment of Container-Native Storage, then the following steps can be omitted.
- Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where “key” is the value for "admin-key" that was created while deploying Container-Native Storage.
For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
- Create a secret file. A sample secret file is provided below (an illustrative sketch appears at the end of this section).
- Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
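The sample secret file is not reproduced in this extract. A minimal sketch, assuming the heketi-secret name used above and the base64-encoded admin key from the previous step, is:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs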
9.1.2.1.3. Creating a Persistent Volume Claim
- Create a Persistent Volume Claim file. A sample persistent volume claim is provided below (an illustrative sketch appears at the end of this section).
- Register the claim by executing the following command:
# oc create -f glusterfs-pvc-claim1.yaml
persistentvolumeclaim "claim1" created
- To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
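The sample persistent volume claim is not reproduced in this extract. A minimal sketch that requests storage from the gluster-container storage class registered earlier (the claim name and requested size are placeholders consistent with the examples) is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi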
9.1.2.1.4. Verifying Claim Creation
- To get the details of the persistent volume claim and persistent volume, execute the following commands:
# oc get pvc
# oc get pv
- To validate if the endpoint and the services are created as part of claim creation, execute the following commands:
# oc get endpoints
# oc get service
9.1.2.1.5. Using the Claim in a Pod
- To use the claim in the application, for example:
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application see, https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
# oc get pods
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
9.1.2.1.6. Deleting a Persistent Volume Claim
- To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
- To verify that the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When a user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, Kubernetes also deletes the persistent volume, endpoints, service, and the actual volume in addition to the persistent volume claim. Execute the following commands to verify this:
- To verify that the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
- To verify that the endpoints are deleted, execute the following command:
# oc get endpoints <endpointname>
For example:
# oc get endpoints gluster-dynamic-claim1
No resources found.
- To verify that the service is deleted, execute the following command:
# oc get service <servicename>
For example:
# oc get service gluster-dynamic-claim1
No resources found.
9.1.3. Volume Security
To create a statically provisioned volume with a GID, execute the following command:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json --gid=590
Two new parameters, gidMin and gidMax, are introduced with the dynamic provisioner. These values allow the administrator to configure the GID range for the volume in the storage class. To set up the GID values and provide volume security for dynamically provisioned volumes, execute the following commands:
- Create a storage class file with the GID values. For example:
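A minimal sketch of such a storage class, reusing the resturl, restuser, secretNamespace, and secretName values assumed earlier and adding an example GID range:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  # assumed Heketi route; replace with your environment's value
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  # example GID range for dynamically provisioned volumes
  gidMin: "2000"
  gidMax: "4000"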
Note
If the gidMin and gidMax values are not provided, dynamically provisioned volumes will have a GID between 2000 and 2147483647.
- Create a persistent volume claim. For more information, see Section 9.1.2.1.3, “Creating a Persistent Volume Claim”.
- Use the claim in a pod. Ensure that the pod is non-privileged. For more information, see Section 9.1.2.1.5, “Using the Claim in a Pod”.
- To verify that the GID is within the range specified, execute the following command:
# oc rsh busybox
$ id
For example:
$ id
uid=1000060000 gid=0(root) groups=0(root),2001
where 2001 in the above output is the allocated GID for the persistent volume, which is within the range specified in the storage class. You can write to this volume with the allocated GID.
Note
When the persistent volume claim is deleted, the GID of the persistent volume is released from the pool.
9.2. Block Storage
Note
9.2.1. Dynamic Provisioning of Volumes for Block Storage
Note
9.2.1.1. Configuring Dynamic Provisioning of Volumes
9.2.1.1.1. Configuring Multipathing on all Initiators
- To install the initiator-related packages on all the nodes where the initiator has to be configured, execute the following command:
# yum install iscsi-initiator-utils device-mapper-multipath
- To enable multipath, execute the following command:
# mpathconf --enable
- Create and add the following content to the multipath.conf file:
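A configuration along the following lines is typically used for the LIO-based gluster-block targets; treat the exact option values as assumptions and verify them against the multipath guidance for your release:
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes"
                path_grouping_policy "failover"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "const"
                no_path_retry 120
                rr_weight "uniform"
        }
}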
- Execute the following command to restart the multipath service:
# systemctl restart multipathd
9.2.1.1.2. Creating Secret for Heketi Authentication
Note
If the admin-key value (the secret used to access heketi to get the volume details) was not set during the deployment of Container-Native Storage, the following steps can be omitted.
- Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where “key” is the value for admin-key that was created while deploying CNS. For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
- Create a secret file. A sample secret file is provided below:
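A minimal sketch of the secret file, assuming the name heketi-secret referenced by restsecretname in the storage class below and the gluster.org/glusterblock secret type used by the block provisioner:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  # assumed namespace; must match restsecretnamespace in the storage class
  namespace: default
data:
  # base64-encoded admin-key value from the previous step
  key: bXlwYXNzd29yZA==
type: gluster.org/glusterblock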
- Register the secret on OpenShift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
9.2.1.1.3. Registering a Storage Class
- Create a storage class. A sample storage class file is presented below:
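A minimal sketch of such a storage class file (glusterfs-block-storageclass.yaml); every parameter is explained below, and the resturl, cluster ID, and secret values shown here are placeholders:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block
provisioner: gluster.org/glusterblock
parameters:
  # placeholder Heketi route; replace with your environment's value
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  restsecretnamespace: "default"
  restsecretname: "heketi-secret"
  hacount: "3"
  # placeholder cluster ID obtained from 'heketi-cli cluster list'
  clusterids: "630372ccdc720a92c681fb928f27b53f"
  chapauthenabled: "true"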
where,
resturl: The Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format must be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
restuser: The Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
restsecretnamespace + restsecretname: Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional. An empty password is used when both restsecretnamespace and restsecretname are omitted.
hacount: The count of the number of paths to the block target server. hacount provides high availability via the multipathing capability of iSCSI. If there is a path failure, the I/Os will not be interrupted and will be served via the other available paths.
clusterids: The ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note
To get the cluster ID, execute the following command:
# heketi-cli cluster list
chapauthenabled: If you want to provision a block volume with CHAP authentication enabled, this value has to be set to true. This is an optional parameter.
- To register the storage class to OpenShift, execute the following command:
# oc create -f glusterfs-block-storageclass.yaml
storageclass "gluster-block" created
- To get the details of the storage class, execute the following command:
9.2.1.1.4. Creating a Persistent Volume Claim
- Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
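A minimal sketch of the claim file (glusterfs-block-pvc-claim.yaml), assuming the gluster-block storage class registered above and an example 5Gi request:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  annotations:
    # requests dynamic provisioning from the gluster-block storage class
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi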
- Register the claim by executing the following command:
# oc create -f glusterfs-block-pvc-claim.yaml
persistentvolumeclaim "claim1" created
- To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
9.2.1.1.5. Verifying Claim Creation
- To get the details of the persistent volume claim and persistent volume, execute the following command:
9.2.1.1.6. Using the Claim in a Pod
- To use the claim in the application, for example
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
9.2.1.1.7. Deleting a Persistent Volume Claim
- To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
- To verify that the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When a user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, Kubernetes also deletes the persistent volume, endpoints, service, and the actual volume in addition to the persistent volume claim. Execute the following commands to verify this:
- To verify that the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
Chapter 10. Updating the Registry with Container-Native Storage as the Storage Back-end
10.1. Validating the OpenShift Container Platform Registry Deployment
- On the master or a client, execute the following command to log in as the cluster admin user:
# oc login
For example:
If you are not automatically logged in to the default project, switch to it by executing the following command:
# oc project default
- To verify that the pods are created, execute the following command:
# oc get pods
For example:
# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-2-mbu0u    1/1       Running   4          6d
docker-registry-2-spw0o    1/1       Running   3          6d
registry-console-1-rblwo   1/1       Running   3          6d
- To verify that the endpoints are created, execute the following command:
# oc get endpoints
For example:
- To verify that the persistent volume is created, execute the following command:
# oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    REASON    AGE
registry-volume   5Gi        RWX           Retain          Bound     default/registry-claim             7d
- To obtain the details of the persistent volume that was created for the NFS registry, execute the following command:
10.2. Converting the OpenShift Container Platform Registry with Container-Native Storage
Execute the following commands to create a Red Hat Gluster Storage volume to store the registry data and create a persistent volume.
Note
The following steps are performed in the default project.
- Log in to the default project:
# oc project default
For example:
# oc project default
Now using project "default" on server "https://cns30.rh73:8443"
- Execute the following command to create the gluster-registry-endpoints.yaml file:
# oc get endpoints heketi-storage-endpoints -o yaml --namespace=storage-project > gluster-registry-endpoints.yaml
Note
You must create an endpoint for each project from which you want to utilize the Red Hat Gluster Storage registry. Hence, you will have a service and an endpoint in both the default project and the new project (storage-project) created in earlier steps.
- Edit the gluster-registry-endpoints.yaml file. Remove all the metadata except for name, leaving everything else the same.
- Execute the following command to create the endpoint:
# oc create -f gluster-registry-endpoints.yaml
endpoints "gluster-registry-endpoints" created
- To verify the creation of the endpoint, execute the following command:
- Execute the following command to create the gluster-registry-service.yaml file:
# oc get services heketi-storage-endpoints -o yaml --namespace=storage-project > gluster-registry-service.yaml
- Edit the gluster-registry-service.yaml file. Remove all the metadata except for name. Also, remove the specific cluster IP addresses:
- Execute the following command to create the service:
# oc create -f gluster-registry-service.yaml
services "gluster-registry-service" created
- Execute the following command to verify that the service is running:
- Execute the following command to obtain the fsGroup GID of the existing docker-registry pods:
# export GID=$(oc get po --selector="docker-registry=default" -o go-template --template='{{printf "%.0f" ((index .items 0).spec.securityContext.fsGroup)}}')
- Execute the following command to create a volume:
# heketi-cli volume create --size=5 --name=gluster-registry-volume --gid=${GID}
- Create the persistent volume file for the Red Hat Gluster Storage volume:
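A minimal sketch of the persistent volume file (gluster-registry-volume.yaml), assuming the volume and endpoint names used in the previous steps and the 5Gi size created above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-registry-volume
  labels:
    glusterfs: registry-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    # endpoint created earlier in this procedure
    endpoints: gluster-registry-endpoints
    path: gluster-registry-volume
  persistentVolumeReclaimPolicy: Retain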
- Execute the following command to create the persistent volume:
# oc create -f gluster-registry-volume.yaml
- Execute the following command to verify and get the details of the created persistent volume:
# oc get pv/gluster-registry-volume
NAME                      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
gluster-registry-volume   5Gi        RWX           Retain          Available                       21m
- Create a new persistent volume claim. The following is a sample persistent volume claim that will be used to replace the existing registry-storage volume claim.
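A minimal sketch of such a claim (gluster-registry-claim.yaml), assuming the 5Gi capacity and ReadWriteMany access mode of the persistent volume created above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-registry-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi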
- Create the persistent volume claim by executing the following command:
# oc create -f gluster-registry-claim.yaml
For example:
# oc create -f gluster-registry-claim.yaml
persistentvolumeclaim "gluster-registry-claim" created
- Execute the following command to verify that the claim is bound:
# oc get pvc/gluster-registry-claim
For example:
# oc get pvc/gluster-registry-claim
NAME                     STATUS    VOLUME                    CAPACITY   ACCESSMODES   AGE
gluster-registry-claim   Bound     gluster-registry-volume   5Gi        RWX           22s
- If you want to migrate the data from the old registry to the Red Hat Gluster Storage registry, execute the following commands:
Note
These steps are optional.
- Make the old registry read-only by executing the following command:
# oc set env dc/docker-registry REGISTRY_STORAGE_MAINTENANCE_READONLY_ENABLED=true
- Add the Red Hat Gluster Storage registry to the old registry deployment configuration (dc) by executing the following command:
# oc volume dc/docker-registry --add --name=gluster-registry-storage -m /gluster-registry -t pvc --claim-name=gluster-registry-claim
- Save the registry pod name by executing the following command:
# export REGISTRY_POD=$(oc get po --selector="docker-registry=default" -o go-template --template='{{printf "%s" ((index .items 0).metadata.name)}}')
- Run rsync of data from the old registry to the Red Hat Gluster Storage registry by executing the following command:
# oc rsync $REGISTRY_POD:/registry/ $REGISTRY_POD:/gluster-registry/
- Remove the Red Hat Gluster Storage registry from the old dc registry by executing the following command:
# oc volume dc/docker-registry --remove --name=gluster-registry-storage
- Swap the existing registry storage volume for the new Red Hat Gluster Storage volume by executing the following command:
# oc volume dc/docker-registry --add --name=registry-storage -t pvc --claim-name=gluster-registry-claim --overwrite
- Make the registry read-write by executing the following command:
# oc set env dc/docker-registry REGISTRY_STORAGE_MAINTENANCE_READONLY_ENABLED-
Chapter 11. Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment
- To list the pods, execute the following command:
# oc get pods
For example:
The following are the gluster pods from the above example:
glusterfs-dc-node1.example.com
glusterfs-dc-node2.example.com
glusterfs-dc-node3.example.com
Note
The topology.json file provides the details of the nodes in a given Trusted Storage Pool (TSP). In the above example, all three Red Hat Gluster Storage nodes are from the same TSP.
- To enter the gluster pod shell, execute the following command:
# oc rsh <gluster_pod_name>
For example:
# oc rsh glusterfs-dc-node1.example.com
sh-4.2#
- To get the peer status, execute the following command:
# gluster peer status
For example:
- To list the gluster volumes in the Trusted Storage Pool, execute the following command:
# gluster volume info
For example:
- To get the volume status, execute the following command:
# gluster volume status <volname>
For example:
- To use the snapshot feature, load the snapshot module using the following command:
# modprobe dm_snapshot
Important
Restrictions for using Snapshot
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done, as it might damage the consistency of the data.
- On a volume with snapshots, volume-changing operations, such as volume expansion, must not be performed.
- To take a snapshot of a gluster volume, execute the following command:
# gluster snapshot create <snapname> <volname>
For example:
# gluster snapshot create snap1 vol_9e86c0493f6b1be648c9deee1dc226a6
snapshot create: success: Snap snap1_GMT-2016.07.29-13.05.46 created successfully
- To list the snapshots, execute the following command:
# gluster snapshot list
For example:
- To delete a snapshot, execute the following command:
# gluster snap delete <snapname>
For example:
# gluster snap delete snap1_GMT-2016.07.29-13.05.46
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1_GMT-2016.07.29-13.05.46: snap removed successfully
For more information about managing snapshots, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Managing_Snapshots.
- You can set up Container-Native Storage volumes for geo-replication to a non-Container-Native Storage remote site. Geo-replication uses a master–slave model. Here, the Container-Native Storage volume acts as the master volume. To set up geo-replication, you must run the geo-replication commands on gluster pods. To enter the gluster pod shell, execute the following command:
# oc rsh <gluster_pod_name>
For more information about setting up geo-replication, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/chap-managing_geo-replication.
- Brick multiplexing is a feature that allows multiple bricks to be included in one process. This reduces resource consumption, allowing you to run more bricks than before with the same memory consumption. Brick multiplexing is enabled by default from Container-Native Storage 3.6. If you want to turn it off, execute the following command:
# gluster volume set all cluster.brick-multiplex off
- The auto_unmount option in glusterfs libfuse, when enabled, ensures that the file system is unmounted at FUSE server termination by running a separate monitor process that performs the unmount. The GlusterFS plugin in OpenShift enables the auto_unmount option for gluster mounts.
Chapter 12. Managing Clusters
12.1. Increasing Storage Capacity
- Adding devices
- Increasing cluster size
- Adding an entirely new cluster.
12.1.1. Adding New Devices
12.1.1.1. Using Heketi CLI
The following example command adds the device /dev/sde to node d6f2c22f2757bf67b1486d868dcb7794:
# heketi-cli device add --name=/dev/sde --node=d6f2c22f2757bf67b1486d868dcb7794
OUTPUT:
Device added successfully
12.1.1.2. Updating Topology File
The following example shows the topology file with the /dev/sde drive added to the node:
12.1.2. Increasing Cluster Size
Note
12.1.2.1. Using Heketi CLI
The following example shows how to add a new node in zone 1 to the 597fceb5d6c876b899e48f599b988f54 cluster using the CLI:
The following example shows how to add the /dev/sdb and /dev/sdc devices to the 095d5f26b56dc6c64564a9bc17338cbf node:
12.1.2.2. Updating Topology File
Add the new node to the topology file after the existing ones so that the Heketi CLI identifies the cluster that this new node should be part of.
12.1.3. Adding a New Cluster
- Adding a new cluster to the existing Container-Native Storage
- Adding another Container-Native Storage cluster in a new project
12.1.3.1. Adding a New Cluster to the Existing Container-Native Storage
- Verify that Container-Native Storage is deployed and working as expected in the existing project by executing the following command:
# oc get ds
For example:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR           AGE
glusterfs   3         3         3         storagenode=glusterfs   8m
- Add the label for each node where the Red Hat Gluster Storage pods are to be added for the new cluster by executing the following command:
# oc label node <NODE_NAME> storagenode=<node_label>
where,
- NODE_NAME: The name of the newly created node.
- node_label: The name that is used in the existing daemonSet.
For example:
# oc label node 192.168.90.3 storagenode=glusterfs
node "192.168.90.3" labeled
- Verify that the Red Hat Gluster Storage pods are running by executing the following command:
# oc get ds
For example:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR           AGE
glusterfs   6         6         6         storagenode=glusterfs   8m
- Create a new topology file for the new cluster. You must provide a topology file for the new cluster which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory. For example:
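The snippet below is a minimal sketch of such a topology file, showing one cluster with a single node and two devices; the hostname, IP address, and device names are placeholders for your environment:
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node5.example.com"],
                            "storage": ["192.168.10.105"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                }
            ]
        }
    ]
}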
where,
- clusters: Array of clusters. Each element of the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element of the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing the optimum position of bricks by having replicas of bricks in different zones. Hence, the zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses.
- manage: It is the hostname/IP address that is used by Heketi to communicate with the node.
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added.
Edit the topology file based on the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and node.hostnames.storage section with the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
- For the existing cluster, heketi-cli will be available to load the new topology. Run the command to add the new topology to heketi:
# heketi-cli topology load --json=<topology file path>
For example:
12.1.3.2. Adding Another Container-Native Storage Cluster in a New Project
Note
- Create a new project by executing the following command:
# oc new-project <new_project_name>
For example:
# oc new-project storage-project-2
Now using project "storage-project-2" on server "https://master.example.com:8443"
- After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can only run in privileged mode:
# oadm policy add-scc-to-user privileged -z storage-project-2
# oadm policy add-scc-to-user privileged -z default
- Create a new topology file for the new cluster. You must provide a topology file for the new cluster which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory. For example:
where,
- clusters: Array of clusters. Each element of the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element of the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing the optimum position of bricks by having replicas of bricks in different zones. Hence, the zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses.
- manage: It is the hostname/IP address that is used by Heketi to communicate with the node.
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added.
Edit the topology file based on the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and node.hostnames.storage section with the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
- Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -n <namespace> --daemonset-label <NODE_LABEL> -g topology.json
For example:
Note
For more information on the cns-deploy commands, refer to the cns-deploy man page.
# cns-deploy --help
- Verify that Container-Native Storage is deployed and working as expected in the new project with the new daemonSet label by executing the following command:
# oc get ds
For example:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR            AGE
glusterfs   3         3         3         storagenode=glusterfs2   8m
12.2. Reducing Storage Capacity
Note
- The IDs can be retrieved by executing the heketi-cli topology info command.
# heketi-cli topology info
- The heketidbstorage volume cannot be deleted as it contains the heketi database.
12.2.1. Deleting Volumes
# heketi-cli volume delete <volume_id>
# heketi-cli volume delete 12b2590191f571be9e896c7a483953c3
Volume 12b2590191f571be9e896c7a483953c3 deleted
12.2.2. Deleting Device
12.2.2.1. Disabling and Enabling a Device
# heketi-cli device disable <device_id>
# heketi-cli device disable f53b13b9de1b5125691ee77db8bb47f4
Device f53b13b9de1b5125691ee77db8bb47f4 is now offline
# heketi-cli device enable <device_id>
# heketi-cli device enable f53b13b9de1b5125691ee77db8bb47f4
Device f53b13b9de1b5125691ee77db8bb47f4 is now online
12.2.2.2. Removing and Deleting the Device
- Remove the device using the following command:
# heketi-cli device remove <device_id>
For example:
# heketi-cli device remove e9ef1d9043ed3898227143add599e1f9
Device e9ef1d9043ed3898227143add599e1f9 is now removed
- Delete the device using the following command:
# heketi-cli device delete <device_id>
For example:
# heketi-cli device delete 56912a57287d07fad0651ba0003cf9aa
Device 56912a57287d07fad0651ba0003cf9aa deleted
The only way to reuse a deleted device is by adding the device to heketi's topology again.
12.2.2.3. Replacing a Device
- Locate the device that has failed using the following command:
# heketi-cli topology info
The example below illustrates the sequence of operations that are required to replace a failed device. The example uses device ID a811261864ee190941b17c72809a5001, which belongs to the node with ID 8faade64a9c8669de204b66bc083b10d.
- Add a new device, preferably to the same node as the device being replaced.
# heketi-cli device add --name /dev/vdd --node 8faade64a9c8669de204b66bc083b10d
Device added successfully
- Disable the failed device.
# heketi-cli device disable a811261864ee190941b17c72809a5001
Device a811261864ee190941b17c72809a5001 is now offline
- Remove the failed device.
# heketi-cli device remove a811261864ee190941b17c72809a5001
Device a811261864ee190941b17c72809a5001 is now removed
At this stage, the bricks are migrated from the failed device. Heketi chooses a suitable device based on the brick allocation algorithm. As a result, there is a possibility that not all the bricks are migrated to the newly added device.
- Delete the failed device.
# heketi-cli device delete a811261864ee190941b17c72809a5001
Device a811261864ee190941b17c72809a5001 deleted
- Before repeating the above sequence of steps on another device, you must wait for the self-heal operation to complete. You can verify that the self-heal operation has completed when the Number of entries value returns 0.
# oc rsh <any_gluster_pod_name>
for each in $(gluster volume list) ; do gluster vol heal $each info | grep "Number of entries:" ; done
Number of entries: 0
Number of entries: 0
Number of entries: 0
12.2.3. Deleting Node
12.2.3.1. Disabling and Enabling a Node
# heketi-cli node disable <node_id>
# heketi-cli node disable 5f0af88b968ed1f01bf959fe4fe804dc
Node 5f0af88b968ed1f01bf959fe4fe804dc is now offline
# heketi-cli node enable <node_id>
# heketi-cli node enable 5f0af88b968ed1f01bf959fe4fe804dc
Node 5f0af88b968ed1f01bf959fe4fe804dc is now online
12.2.3.2. Removing and Deleting the Node
- To remove the node, execute the following command:
# heketi-cli node remove <node_id>
For example:
# heketi-cli node remove 5f0af88b968ed1f01bf959fe4fe804dc
Node 5f0af88b968ed1f01bf959fe4fe804dc is now removed
- Delete the devices associated with the node by executing the following command, as nodes that have devices associated with them cannot be deleted:
# heketi-cli device delete <device_id>
For example:
# heketi-cli device delete 56912a57287d07fad0651ba0003cf9aa
Device 56912a57287d07fad0651ba0003cf9aa deleted
Execute the command for every device on the node.
- Delete the node using the following command:
# heketi-cli node delete <node_id>
For example:
# heketi-cli node delete 5f0af88b968ed1f01bf959fe4fe804dc
Node 5f0af88b968ed1f01bf959fe4fe804dc deleted
Deleting the node deletes the node from the heketi topology. The only way to reuse a deleted node is by adding the node to heketi's topology again.
12.2.3.3. Replacing a Node
- Locate the node that has failed using the following command:
The example below illustrates the sequence of operations that are required to replace a failed node. The example uses node ID 8faade64a9c8669de204b66bc083b10d.
- Add a new node, preferably one that has the same devices as the node being replaced.
# heketi-cli node add --zone=1 --cluster=597fceb5d6c876b899e48f599b988f54 --management-host-name=node4.example.com --storage-host-name=192.168.10.104
# heketi-cli device add --name /dev/vdd --node 8faade64a9c8669de204b66bc083b10d
Node and device added successfully
- Disable the failed node.
# heketi-cli node disable 8faade64a9c8669de204b66bc083b10d
Node 8faade64a9c8669de204b66bc083b10d is now offline
- Remove the failed node.
# heketi-cli node remove 8faade64a9c8669de204b66bc083b10d
Node 8faade64a9c8669de204b66bc083b10d is now removed
At this stage, the bricks are migrated from the failed node. Heketi chooses a suitable device based on the brick allocation algorithm.
- Delete the devices associated with the node by executing the following command, as nodes that have devices associated with them cannot be deleted:
# heketi-cli device delete <device_id>
For example:
# heketi-cli device delete 56912a57287d07fad0651ba0003cf9aa
Device 56912a57287d07fad0651ba0003cf9aa deleted
Execute the command for every device on the node.
- Delete the failed node.
# heketi-cli node delete 8faade64a9c8669de204b66bc083b10d
Node 8faade64a9c8669de204b66bc083b10d deleted
12.2.4. Deleting Clusters
# heketi-cli cluster delete <cluster_id>
# heketi-cli cluster delete 0e949d91c608d13fd3fc4e96f798a5b1
Cluster 0e949d91c608d13fd3fc4e96f798a5b1 deleted
Chapter 13. Upgrading your Container-Native Storage Environment
13.1. Prerequisites
13.2. Upgrading cns-deploy and Heketi Server
- Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template:
# oc delete templates heketi
- Execute the following command to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file:
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see Section 9.2, “Block Storage”). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
- Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
- Execute the following command to deploy the Heketi service which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
For example:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
13.3. Upgrading the Red Hat Gluster Storage Pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
- Execute the following command to delete the old glusterfs template:
# oc delete templates glusterfs
For example,
# oc delete templates glusterfs
template “glusterfs” deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, proceed with the next step.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
- Execute the following command to register the new gluster template:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example,
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template “glusterfs” created
- Execute the following commands to start the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset “glusterfs” created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
For example,
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod. We support the OnDelete Strategy DaemonSet update strategy. With the OnDelete Strategy update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod “glusterfs-0vcf3” deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access the shell on a gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to obtain the volume names:
# gluster volume list
- Run the following command on each volume to check the self-heal status:
# gluster volume heal <volname> info
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following is the example output showing the status progression from termination to creation of the pod.
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
- Execute the following command to verify that you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
- Check the Red Hat Gluster Storage op-version by executing the following command:
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31101 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31101
sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
# oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
# oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption, and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.5 to Container-Native Storage 3.6, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
oc rsh <gluster_pod_name>
# oc rsh <gluster_pod_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Execute the following command to enable brick multiplexing:
gluster volume set all cluster.brick-multiplex on
# gluster volume set all cluster.brick-multiplex on
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:oc rsh glusterfs-770ql
# oc rsh glusterfs-770ql sh-4.2# gluster volume set all cluster.brick-multiplex on Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y volume set: success
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - List all the volumes in the trusted storage pool:For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Restart all the volumes:gluster vol stop <VOLNAME> gluster vol start <VOLNAME>
# gluster vol stop <VOLNAME> # gluster vol start <VOLNAME>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
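The following is a minimal sketch for restarting every volume in one pass from inside a gluster pod; it assumes a maintenance window in which the application pods using these volumes are already stopped, and uses --mode=script to suppress the interactive confirmation:
sh-4.2# for vol in $(gluster volume list); do gluster --mode=script volume stop $vol; gluster volume start $vol; done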
- Starting with Container-Native Storage 3.6, support for an S3 compatible Object Store in Container-Native Storage is under technology preview. To enable the S3 compatible object store, refer to Chapter 18, S3 Compatible Object Store in a Container-Native Storage Environment.
Chapter 14. Upgrading Your Container-Ready Storage Environment
14.1. Prerequisites
- If Heketi is running as a standalone service on one of the Red Hat Gluster Storage nodes, ensure that the port for Heketi is open. By default, Heketi uses port 8080. To open this port, execute the following commands on the node where Heketi is running:
# firewall-cmd --zone=zone_name --add-port=8080/tcp
# firewall-cmd --zone=zone_name --add-port=8080/tcp --permanent
If Heketi is configured to listen on a different port, change the port number in the commands accordingly.
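To confirm that the change took effect and that Heketi answers on the opened port, you can list the open ports and query the /hello endpoint; a quick check, where the zone name and node IP are placeholders:
# firewall-cmd --zone=zone_name --list-ports
# curl http://<heketi_node_ip>:8080/hello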
14.2. Upgrading Container-Ready Storage
- Upgrade the Red Hat Gluster Storage cluster. Refer to In-Service Software Upgrade.
- Upgrade Heketi by executing the following commands on the Red Hat Gluster Storage node where Heketi is running:
- Back up the Heketi database file:
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
- Update Heketi by executing the following command on the node where Heketi is running:
# yum update heketi
- To use gluster block, add the following two parameters to the glusterfs section in the heketi configuration file at /etc/heketi/heketi.json:
auto_create_block_hosting_volume
block_hosting_volume_size
Where:
auto_create_block_hosting_volume: Creates block hosting volumes automatically if none is found or if the existing volume is exhausted. To enable this, set the value to true.
block_hosting_volume_size: New block hosting volumes are created with the size specified here. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500G.
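For example, the glusterfs section of /etc/heketi/heketi.json might look like the following sketch; everything other than the two new parameters (the executor, keyfile, and db path) is illustrative and should match your existing configuration, and the size value is in GB:
"glusterfs": {
  "executor": "ssh",
  "sshexec": {
    "keyfile": "/etc/heketi/heketi_key",
    "user": "root"
  },
  "db": "/var/lib/heketi/heketi.db",
  "auto_create_block_hosting_volume": true,
  "block_hosting_volume_size": 500
}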
- Restart the Heketi service:
# systemctl restart heketi
- Execute the following command to install gluster block:
# yum install gluster-block
- Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
- Execute the following commands to update the heketi client and cns-deploy packages:
# yum install cns-deploy -y
# yum update cns-deploy -y
# yum update heketi-client -y
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Support for an S3 compatible Object Store is under technology preview. To enable the S3 compatible object store, refer to Chapter 18, S3 Compatible Object Store in a Container-Native Storage Environment.
Chapter 15. Troubleshooting
- What to do if a Container-Native Storage node fails
If a Container-Native Storage node fails and you want to delete it, disable the node before deleting it. For more information, see Section 12.2.3, "Deleting Node". If a Container-Native Storage node fails and you want to replace it, refer to Section 12.2.3.3, "Replacing a Node".
- What to do if a Container-Native Storage device fails
If a Container-Native Storage device fails and you want to delete it, disable the device before deleting it. For more information, see Section 12.2.2, "Deleting Device". If a Container-Native Storage device fails and you want to replace it, refer to Section 12.2.2.3, "Replacing a Device".
- What to do if Container-Native Storage volumes require more capacity
You can increase the storage capacity by adding devices, increasing the cluster size, or adding an entirely new cluster. For more information, see Section 12.1, "Increasing Storage Capacity".
- How to upgrade OpenShift when Container-Native Storage is installed
To upgrade OpenShift Container Platform, refer to https://access.redhat.com/documentation/en-us/openshift_container_platform/3.6/html/installation_and_configuration/upgrading-a-cluster.
- Viewing Log Files
- Viewing Red Hat Gluster Storage Container Logs
Debugging information related to Red Hat Gluster Storage containers is stored on the host where the containers are started. Specifically, the logs and configuration files can be found at the following locations on the OpenShift nodes where the Red Hat Gluster Storage server containers run:
- /etc/glusterfs
- /var/lib/glusterd
- /var/log/glusterfs
- Viewing Heketi Logs
Debugging information related to Heketi is stored locally in the container or in the persistent volume that is provided to the Heketi container. You can obtain logs for Heketi by running the
docker logs <container-id>
command on the OpenShift node where the container is running.
- Heketi command returns with no error or empty error like Error
Sometimes, running a heketi-cli command returns with no error or an empty error like Error. This is usually because the Heketi server is not properly configured. First ping to validate that the Heketi server is available, and then verify with a curl command against the /hello endpoint.
- Heketi reports an error while loading the topology file
Running heketi-cli reports the "Unable to open topology file" error while loading the topology file. This could be due to the use of the old syntax of a single hyphen (-) as a prefix for the json option. You must use the new syntax of double hyphens and reload the topology file.
- cURL command to heketi server fails or does not respond
If the router or heketi is not configured properly, error messages from heketi may not be clear. To troubleshoot, ping the heketi service using the endpoint and also using the IP address. If ping by the IP address succeeds and ping by the endpoint fails, it indicates a router configuration error.
After the router is set up properly, run a simple curl command like the following:
# curl http://deploy-heketi-storage-project.cloudapps.mystorage.com/hello
If heketi is configured correctly, a welcome message from heketi is displayed. If not, check the heketi configuration.
- Heketi fails to start when a Red Hat Gluster Storage volume is used to store the heketi.db file
Sometimes Heketi fails to start when a Red Hat Gluster Storage volume is used to store heketi.db and reports the following error:
[heketi] INFO 2016/06/23 08:33:47 Loaded kubernetes executor
[heketi] ERROR 2016/06/23 08:33:47 /src/github.com/heketi/heketi/apps/glusterfs/app.go:149: write /var/lib/heketi/heketi.db: read-only file system
ERROR: Unable to start application
The read-only file system error shown above can occur when a Red Hat Gluster Storage volume is used as the backend and quorum is lost for that volume. In a replica-3 volume, this is seen if 2 of the 3 bricks are down. You must ensure that quorum is met for the heketi gluster volume and that it is able to write to the heketi.db file again. Even if you see a different error, it is recommended practice to check whether the Red Hat Gluster Storage volume serving the heketi.db file is available. Denied access to the heketi.db file is the most common reason for Heketi failing to start.
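A quick way to check whether the volume backing heketi.db is healthy is to query it from one of the gluster pods; the following is a minimal sketch, assuming the default volume name heketidbstorage created by the setup (the pod name is a placeholder):
# oc rsh <gluster_pod_name> gluster volume status heketidbstorage
# oc rsh <gluster_pod_name> gluster volume heal heketidbstorage info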
Chapter 16. Uninstalling Containerized Red Hat Gluster Storage
- Clean up Red Hat Gluster Storage using Heketi
- Remove any containers using the persistent volume claim from Red Hat Gluster Storage.
- Remove the appropriate persistent volume claim and persistent volume:
# oc delete pvc <pvc_name>
# oc delete pv <pv_name>
- Remove all OpenShift objects
- Delete all project-specific pods, services, routes, and deployment configurations. Wait until all the pods have been terminated.
- Check and delete the gluster service and endpoints from the projects that required persistent storage:
# oc get endpoints,service
# oc delete endpoints <glusterfs-endpoint-name>
# oc delete service <glusterfs-service-name>
- Clean up the persistent directories
- To clean up the persistent directories, execute the following command on each node as the root user:
# rm -rf /var/lib/heketi \
   /etc/glusterfs \
   /var/lib/glusterd \
   /var/log/glusterfs
- Force cleanup of the disks
- Execute the following command to clean up the disks:
# wipefs -a -f /dev/<disk-id>
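Before wiping, it can help to confirm which device backed the Red Hat Gluster Storage bricks so that the wrong disk is not cleared; a minimal check, using /dev/sdb as a placeholder device name:
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb
# wipefs -a -f /dev/sdb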
Chapter 17. Enabling Encryption
- I/O encryption - encryption of the I/O connections between the Red Hat Gluster Storage clients and servers.
- Management encryption - encryption of the management (glusterd) connections within a trusted storage pool.
17.1. Prerequisites
Note
- Ensure that you perform the steps on all the OpenShift nodes except the master.
- All Red Hat Gluster Storage volumes are mounted on the OpenShift nodes and then bind-mounted to the application pods. Hence, it is not required to perform any encryption-related operations specifically on the application pods.
17.2. Enabling Encryption for a New Container-Native Storage Setup
17.2.1. Enabling Management Encryption
Perform the following on all the servers, that is, the OpenShift nodes on which the Red Hat Gluster Storage pods are running.
- Create the /var/lib/glusterd/secure-access file.
# touch /var/lib/glusterd/secure-access
Perform the following on the clients, that is, all the remaining OpenShift nodes on which Red Hat Gluster Storage is not running.
- Create the /var/lib/glusterd/secure-access file.
# touch /var/lib/glusterd/secure-access
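If the cluster has many OpenShift nodes, the file can be created in one pass over a list of hostnames; the following is a minimal sketch, assuming passwordless SSH as root and a hypothetical nodes.txt file containing one hostname per line:
# while read node; do ssh root@"$node" "touch /var/lib/glusterd/secure-access"; done < nodes.txt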
Note
17.2.2. Enabling I/O Encryption for a Volume
Note
- Ensure Container-Native Storage is deployed before proceeding with further steps. For more information, see Section 8.2, "Deploying Containerized Red Hat Gluster Storage Solutions".
- You can create either a statically provisioned volume or a dynamically provisioned volume. For more information about static provisioning of volumes, see Section 9.1.1, "Static Provisioning of Volumes". For more information about dynamic provisioning of volumes, see Section 9.1.2, "Dynamic Provisioning of Volumes".
Note
To enable encryption during the creation of a statically provisioned volume, execute the following command:
# heketi-cli volume create --size=100 --gluster-volume-options="client.ssl on","server.ssl on"
- Stop the volume by executing the following command:
# oc rsh <gluster_pod_name> gluster volume stop VOLNAME
The gluster pod name is the name of one of the Red Hat Gluster Storage pods of the trusted storage pool to which the volume belongs.
Note
To get the VOLNAME, execute the following command:
# oc describe pv <pv_name>
The VOLNAME is the value of "path" in the output.
- Set the list of common names of all the servers to access the volume. Ensure that you include the common names of the clients that will be allowed to access the volume.
# oc rsh <gluster_pod_name> gluster volume set VOLNAME auth.ssl-allow 'server1,server2,server3,client1,client2,client3'
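To confirm that the option was applied before moving on, you can read it back from inside a gluster pod; a minimal check using the same placeholders as above:
# oc rsh <gluster_pod_name> gluster volume get VOLNAME auth.ssl-allow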
Note
If you set the auth.ssl-allow option with * as the value, any TLS-authenticated client can mount and access the volume from the application side. Hence, either set the option's value to * or provide the common names of the clients as well as the nodes in the trusted storage pool.
- Enable the client.ssl and server.ssl options on the volume:
# oc rsh <gluster_pod_name> gluster volume set VOLNAME client.ssl on
# oc rsh <gluster_pod_name> gluster volume set VOLNAME server.ssl on
- Start the volume.
# oc rsh <gluster_pod_name> gluster volume start VOLNAME
17.3. Enabling Encryption for an Existing Container-Native Storage Setup
17.3.1. Enabling I/O Encryption for a Volume
Note
- Stop all the application pods that have the Red Hat Gluster Storage volumes.
- Stop the volume.
# oc rsh <gluster_pod_name> gluster volume stop VOLNAME
The gluster pod name is the name of one of the Red Hat Gluster Storage pods of the trusted storage pool to which the volume belongs.
- Set the list of common names for clients allowed to access the volume. Be sure to include the common names of all the servers.
# oc rsh <gluster_pod_name> gluster volume set VOLNAME auth.ssl-allow 'server1,server2,server3,client1,client2,client3'
Note
If you set the auth.ssl-allow option with * as the value, any TLS-authenticated client can mount and access the volume from the application side. Hence, either set the option's value to * or provide the common names of the clients as well as the nodes in the trusted storage pool.
- Enable the client.ssl and server.ssl options on the existing volume:
# oc rsh <gluster_pod_name> gluster volume set VOLNAME client.ssl on
# oc rsh <gluster_pod_name> gluster volume set VOLNAME server.ssl on
- Start the volume.
# oc rsh <gluster_pod_name> gluster volume start VOLNAME
- Start the application pods to use the I/O encrypted Red Hat Gluster Storage volumes.
17.3.2. Enabling Management Encryption
- Stop all the application pods that have the Red Hat Gluster Storage volumes.
- Stop all the volumes.
# oc rsh <gluster_pod_name> gluster volume stop VOLNAME
- Stop the Red Hat Gluster Storage pods.
# oc delete daemonset glusterfs
- On deletion of the daemon set, the pods go down. To verify that the pods are down, execute the following command:
# oc get pods
- Create the /var/lib/glusterd/secure-access file on all OpenShift nodes.
# touch /var/lib/glusterd/secure-access
- Create the Red Hat Gluster Storage daemonset by executing the following command:
# oc process glusterfs | oc create -f -
- On creation of the daemon set, the pods are started. To verify that the pods are started, execute the following command:
# oc get pods
- Start all the volumes.
# oc rsh <gluster_pod_name> gluster volume start VOLNAME
17.4. Disabling Encryption
- Disabling I/O Encryption for a Volume
- Disabling Management Encryption
17.4.1. Disabling I/O Encryption for all the Volumes
Note
- Stop all the application pods that have the Red Hat Gluster Storage volumes.
- Stop all the volumes.
# oc rsh <gluster_pod_name> gluster volume stop VOLNAME
- Reset all the encryption options for a volume:
# oc rsh <gluster_pod_name> gluster volume reset VOLNAME auth.ssl-allow
# oc rsh <gluster_pod_name> gluster volume reset VOLNAME client.ssl
# oc rsh <gluster_pod_name> gluster volume reset VOLNAME server.ssl
- Delete the files that were used for network encryption using the following command on all the OpenShift nodes:
# rm /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca
- Stop the Red Hat Gluster Storage pods.
# oc delete daemonset glusterfs
- On deletion of the daemon set, the pods go down. To verify that the pods are down, execute the following command:
# oc get pods
- Create the Red Hat Gluster Storage daemonset by executing the following command:
# oc process glusterfs | oc create -f -
- On creation of the daemon set, the pods are started. To verify that the pods are started, execute the following command:
# oc get pods
- Start the volume.
# oc rsh <gluster_pod_name> gluster volume start VOLNAME
- Start the application pods to use the Red Hat Gluster Storage volumes.
17.4.2. Disabling Management Encryption
- Stop all the application pods that have the Red Hat Gluster Storage volumes.
- Stop all the volumes.
# oc rsh <gluster_pod_name> gluster volume stop VOLNAME
- Stop the Red Hat Gluster Storage pods.
# oc delete daemonset glusterfs
- On deletion of the daemon set, the pods go down. To verify that the pods are down, execute the following command:
# oc get pods
- Delete the /var/lib/glusterd/secure-access file on all OpenShift nodes to disable management encryption.
# rm /var/lib/glusterd/secure-access
- Delete the files that were used for network encryption using the following command on all the OpenShift nodes:
# rm /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca
- Create the Red Hat Gluster Storage daemonset by executing the following command:
# oc process glusterfs | oc create -f -
- On creation of the daemon set, the pods are started. To verify that the pods are started, execute the following command:
# oc get pods
- Start all the volumes.
# oc rsh <gluster_pod_name> gluster volume start VOLNAME
- Start the application pods to use the management encrypted Red Hat Gluster Storage.
Chapter 18. S3 Compatible Object Store in a Container-Native Storage Environment
Important
18.1. Prerequisites
- The OpenShift setup must be up, with the master and nodes ready. For more information, see Section 8.1, "Preparing the Red Hat OpenShift Container Platform Cluster".
- The cns-deploy tool has been run and the Heketi service is ready. For more information, see Section 8.2, "Deploying Containerized Red Hat Gluster Storage Solutions".
18.2. Setting up S3 Compatible Object Store for Container-Native Storage
- (Optional): If you want to create a secret for heketi, execute the following command:
# oc create secret generic heketi-${NAMESPACE}-admin-secret --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs
For example:
# oc create secret generic heketi-storage-project-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs
- Execute the following command to label the secret:
# oc label --overwrite secret heketi-${NAMESPACE}-admin-secret glusterfs=s3-heketi-${NAMESPACE}-admin-secret gluster-s3=heketi-${NAMESPACE}-admin-secret
For example:
# oc label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret
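To confirm the secret exists and carries the expected labels, it can be listed with its labels; a quick check using the storage-project example above:
# oc get secret heketi-storage-project-admin-secret --show-labels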
- Create a GlusterFS StorageClass file. Use the HEKETI_URL and NAMESPACE from the current setup and set a STORAGE_CLASS name.
# sed -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e 's/${NAMESPACE}/storage-project/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
storageclass "gluster-s3-store" created
- Create the Persistent Volume Claims using the storage class.
# sed -e 's/${VOLUME_CAPACITY}/2Gi/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created
Use the STORAGE_CLASS created in the previous step. Modify the VOLUME_CAPACITY as per the environment requirements. Wait until the PVCs are bound. Verify this using the following command:
# oc get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016   2Gi        RWX           2m
gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016   1Gi        RWX           2m
- Start the glusters3 object storage service using the template:
Note
Set the S3_ACCOUNT name, S3_USER name, and S3_PASSWORD. PVC and META_PVC are obtained from the previous step.
- Execute the following command to verify that the S3 pod is up:
# oc get route
NAME               HOST/PORT                                                              PATH   SERVICES             PORT    TERMINATION   WILDCARD
gluster-S3-route   gluster-s3-route-storage-project.cloudapps.mystorage.com ... 1 more           gluster-s3-service   <all>                 None
heketi             heketi-storage-project.cloudapps.mystorage.com ... 1 more                     heketi               <all>
18.3. Object Operations
- Get the URL of the route which provides the S3 object store:
# s3_storage_url=$(oc get routes | grep "gluster.*s3" | awk '{print $2}')
Note
Ensure that you download the s3curl tool from https://aws.amazon.com/code/128. This tool is used for verifying the object operations.
- s3curl.pl requires Digest::HMAC_SHA1 and Digest::MD5. Install the perl-Digest-HMAC package to get these.
- Update the s3curl.pl perl script with the glusters3object URL which was retrieved. For example:
my @endpoints = ( 'glusters3object-storage-project.cloudapps.mystorage.com');
- To perform a PUT operation on the bucket:
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" --put /dev/null -- -k -v http://$s3_storage_url/bucket1
- To perform a PUT operation on an object inside the bucket:
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" --put my_object.jpg -- -k -v -s http://$s3_storage_url/bucket1/my_object.jpg
- To verify listing of objects in the bucket:
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" -- -k -v -s http://$s3_storage_url/bucket1/
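To download an object back and confirm that a round trip works, the same s3curl invocation can be used without --put, redirecting the response body to a local file; a sketch using the names from the examples above:
s3curl.pl --id "testvolume:adminuser" --key "itsmine" -- -k -s http://$s3_storage_url/bucket1/my_object.jpg > my_object_downloaded.jpg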
Appendix A. Manual Deployment
A.1. Installing the Templates
- Use the newly created containerized Red Hat Gluster Storage project:
# oc project project_name
For example,
# oc project storage-project
Using project "storage-project" on server "https://master.example.com:8443".
- Execute the following commands to install the templates:
# oc create -f /usr/share/heketi/templates/deploy-heketi-template.yaml
template "deploy-heketi" created
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
# oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
serviceaccount "heketi-service-account" created
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to verify that the templates are installed:
# oc get templates
- Execute the following command to verify that the serviceaccount is created:
# oc get serviceaccount heketi-service-account
For example:
# oc get serviceaccount heketi-service-account
NAME                     SECRETS   AGE
heketi-service-account   2         7d
A.2. Deploying the Containers
- List out the hostnames of the nodes on which the Red Hat Gluster Storage container has to be deployed:
# oc get nodes
- Execute the following command to label all nodes that will run Red Hat Gluster Storage pods:
# oc label node <NODENAME> storagenode=glusterfs
For example:
# oc label nodes 192.168.90.3 storagenode=glusterfs
node "192.168.90.3" labeled
Repeat this command for every node that will be in the GlusterFS cluster. Verify that the label has been set properly by running the following command:
# oc get nodes --show-labels
192.168.90.2   Ready                      12d   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.2,storagenode=glusterfs
192.168.90.3   Ready                      12d   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.3,storagenode=glusterfs
192.168.90.4   Ready                      12d   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.4,storagenode=glusterfs
192.168.90.5   Ready,SchedulingDisabled   12d   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.5
- Execute the following command to deploy the Red Hat Gluster Storage pods:
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
Note
This does not initialize the hardware or create trusted storage pools. That aspect is taken care of by heketi, which is explained in the further steps.
- Execute the following commands to grant the heketi service account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example:
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to deploy deploy-heketi:
# oc process deploy-heketi | oc create -f -
For example:
# oc process deploy-heketi | oc create -f -
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
A.3. Setting up the Heketi Server
A sample topology file is available in the /usr/share/heketi/ directory. Edit the node.hostnames.manage section and the node.hostnames.storage section with the hostname and IP address of each node. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each. A trimmed sketch of the topology file structure is shown below.
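The following is a minimal sketch of the structure that heketi-cli topology load expects; the hostname, IP address, zone, and device names are placeholders and should be replaced with the values for your environment:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "node1.example.com" ],
              "storage": [ "192.168.90.2" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb", "/dev/sdc" ]
        }
      ]
    }
  ]
}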
Important
- Execute the following command to check if the bootstrap container is running:
# curl http://deploy-heketi-<project_name>.<sub-domain_name>/hello
For example:
# curl http://deploy-heketi-storage-project.cloudapps.mystorage.com/hello
Hello from Heketi
- Execute the following command to load the topology file:
# export HEKETI_CLI_SERVER=http://deploy-heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://deploy-heketi-storage-project.cloudapps.mystorage.com
# heketi-cli topology load --json=topology.json
- Execute the following command to verify that the topology is loaded:
# heketi-cli topology info
- Execute the following command to create the Heketi storage volume which will store the database on a reliable Red Hat Gluster Storage volume:
# heketi-cli setup-openshift-heketi-storage
For example:
# heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json
- Execute the following command to create a job which will copy the database from the deploy-heketi bootstrap container to the volume:
# oc create -f heketi-storage.json
For example:
# oc create -f heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
- Execute the following command to verify that the job has finished successfully:
# oc get jobs
For example:
# oc get jobs
NAME                      DESIRED   SUCCESSFUL   AGE
heketi-storage-copy-job   1         1            2m
- Execute the following command to remove all resources used to bootstrap heketi:
# oc delete all,job,template,secret --selector="deploy-heketi"
- Execute the following command to deploy the Heketi service which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
For example:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
# heketi-cli topology info
Appendix B. Cluster Administrator Setup
Set up authentication using the AllowAll authentication method. Edit /etc/origin/master/master-config.yaml on the OpenShift master and change the value of DenyAllPasswordIdentityProvider to AllowAllPasswordIdentityProvider. Then restart the OpenShift master.
- Now that the authentication model has been set up, log in to the OpenShift master (for example, https://1.1.1.1:8443) as a user, for example admin/admin:
# oc login https://1.1.1.1:8443 --username=admin --password=admin
- Grant the admin user account the cluster-admin role.
# oadm policy add-cluster-role-to-user cluster-admin admin
Appendix C. Client Configuration using Port Forwarding
- Obtain the Heketi service pod name by running the following command:
# oc get pods
- To forward the port on your local system to the pod, execute the following command on another terminal of your local system:
# oc port-forward <heketi pod name> 8080:8080
This forwards the local port 8080 to the pod port 8080.
- On the original terminal, execute the following command to test the communication with the server:
# curl http://localhost:8080/hello
- Set up the Heketi server environment variable by running the following command:
# export HEKETI_CLI_SERVER=http://localhost:8080
- Get information from Heketi by running the following command:
# heketi-cli topology info
Appendix D. Heketi CLI Commands
- heketi-cli topology info
This command retrieves information about the current topology.
- heketi-cli cluster list
Lists the clusters managed by Heketi.
For example:
# heketi-cli cluster list
Clusters: 9460bbea6f6b1e4d833ae803816122c6
- heketi-cli cluster info <cluster_id>
Retrieves information about the cluster.
- heketi-cli node info <node_id>
Retrieves information about the node.
- heketi-cli volume list
Lists the volumes managed by Heketi.
For example:
# heketi-cli volume list
Id:142e0ec4a4c1d1cc082071329a0911c6    Cluster:9460bbea6f6b1e4d833ae803816122c6    Name:heketidbstorage
Id:638d0dc6b1c85f5eaf13bd5c7ed2ee2a    Cluster:9460bbea6f6b1e4d833ae803816122c6    Name:scalevol-1
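A related command that is often useful (not listed above) is heketi-cli volume info, which prints the size, durability type, and mount details of a single volume; a sketch using the volume Id from the listing above:
# heketi-cli volume info 142e0ec4a4c1d1cc082071329a0911c6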
# heketi-cli --help
Usage:
- heketi-cli [flags]
- heketi-cli [command]
For example:
# export HEKETI_CLI_SERVER=http://localhost:8080
# heketi-cli volume list
The available commands are listed below:
- cluster
Heketi cluster management
- device
Heketi device management
- setup-openshift-heketi-storage
Setup OpenShift/Kubernetes persistent storage for Heketi
- node
Heketi Node Management
- topology
Heketi Topology Management
- volume
Heketi Volume Management
Appendix E. Gluster Block Storage as Backend for Logging and Metrics
Note
E.1. Prerequisites
- In the storageclass file, check whether the default storage class is set to the storage class of gluster block. For example:
# oc get storageclass
NAME            TYPE
gluster-block   gluster.org/glusterblock
- If the default is not set to gluster-block (or any other name that you have provided), execute the following command. For example:
# oc patch storageclass gluster-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
- Execute the following command to verify:
# oc get storageclass
NAME                      TYPE
gluster-block (default)   gluster.org/glusterblock
E.2. Enabling Gluster Block Storage as Backend for Logging
- To enable logging in OpenShift Container Platform, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.6/html-single/installation_and_configuration/#install-config-aggregate-logging
- The openshift_logging_es_pvc_dynamic Ansible variable has to be set to true:
[OSEv3:vars]
openshift_logging_es_pvc_dynamic=true
A sample set of openshift_logging_ variables is shown in the sketch at the end of this procedure.
- Run the Ansible playbook. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.6/html-single/installation_and_configuration/#install-config-aggregate-logging
- To verify, execute the following command:
# oc get pods -n openshift-logging
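The following is a minimal sketch of openshift_logging_ inventory variables; openshift_logging_es_pvc_dynamic comes from the step above, while the other variable names and values are illustrative and should be checked against the OpenShift Container Platform 3.6 aggregated logging documentation for your environment:
[OSEv3:vars]
# Install the EFK logging stack and provision Elasticsearch storage dynamically
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
# Illustrative size; dynamic provisioning uses the default storage class (gluster-block)
openshift_logging_es_pvc_size=50Gi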
Note
E.3. Enabling Gluster Block Storage as Backend for Metrics
Note
- To enable metrics in OpenShift Container Platform, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.6/html-single/installation_and_configuration/#install-config-cluster-metrics
- The openshift_metrics_cassandra_storage_type Ansible variable should be set to dynamic:
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=dynamic
A sample set of openshift_metrics_ variables is shown in the sketch at the end of this procedure.
- Run the Ansible playbook. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.6/html-single/installation_and_configuration/#install-config-cluster-metrics.
- To verify, execute the following command:
# oc get pods -n openshift-infra
It should list the heapster, hawkular-metrics, and hawkular-cassandra pods running.
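The following is a minimal sketch of openshift_metrics_ inventory variables; openshift_metrics_cassandra_storage_type comes from the step above, while the other variable names and values are illustrative and should be checked against the OpenShift Container Platform 3.6 cluster metrics documentation for your environment:
[OSEv3:vars]
# Install cluster metrics and provision Cassandra storage dynamically
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
# Illustrative size; dynamic provisioning uses the default storage class (gluster-block)
openshift_metrics_cassandra_pvc_size=10Gi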
Note
E.4. Verifying if Gluster Block is Set Up as Backend
- To get an overview of the infrastructure, execute the following command:
# oc get pods -n logging -o jsonpath='{range .items[*].status.containerStatuses[*]}{"Name: "}{.name}{"\n "}{"Image: "}{.image}{"\n"}{" State: "}{.state}{"\n"}{end}'
- To get the details of all the persistent volume claims, execute the following command:
# oc get pvc
- To get the details of a PVC, execute the following command:
# oc describe pvc <claim_name>
Verify that the volume is mountable and that permissions allow read/write. Also, the PVC claim name should match the dynamically provisioned gluster block storage class.
Appendix F. Known Issues
- Volumes that were created using Container-Native Storage 3.5 or earlier do not have the GID stored in the heketi database. Hence, when a volume expansion is performed, new bricks do not get the group ID set on them, which might lead to I/O errors.
- The following two lines might be repeatedly logged in the rhgs-server-docker container/gluster container logs:
[MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd.
[socket.c:701:__socket_rwv] 0-nfs: readv on /var/run/gluster/1ab7d02f7e575c09b793c68ec2a478a5.socket failed (Invalid argument)
These logs are added because glusterd is unable to start the NFS service. There is no functional impact, as NFS export is not supported in Containerized Red Hat Gluster Storage.
Appendix G. Revision History
Revision 1.0-17    Thu Mar 01 2018
Revision 1.0-16    Fri Feb 16 2018
Revision 1.0-15    Wed Jan 31 2018
Revision 1.0-14    Fri Jan 05 2018
Revision 1.0-13    Tue Oct 10 2017
Revision 1.0-12    Tue Oct 10 2017
Revision 1.0-4     Mon Apr 03 2017
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.