OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Appendix A. Optional Deployment Method (with cns-deploy)
The following sections provide an optional method to deploy Red Hat Openshift Container Storage using cns-deploy.
cns-deploy is deprecated and will not be supported for new deployments in future Openshift Container Storage versions.
A.1. Setting up Converged mode
The converged mode environment addresses the use-case where applications require both shared storage and the flexibility of a converged infrastructure with compute and storage instances being scheduled and run from the same set of hardware.
A.1.1. Configuring Port Access
On each of the OpenShift nodes that will host the Red Hat Gluster Storage container, add the following rules to /etc/sysconfig/iptables in order to open the required ports:
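The rule listing from the original document is not reproduced in this extract; a representative set of rules, assuming the OS_FIREWALL_ALLOW chain used on OpenShift 3.x nodes and a brick port range of 49152-49664, would look like:
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24010 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT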
Note:
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets, respectively.
- The port range 49152-49664 defines the ports that GlusterFS can use for communication with its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
For more information about Red Hat Gluster Storage Server ports, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-getting_started.
Execute the following command to reload iptables:
# systemctl reload iptables
Execute the following command on each node to verify that the iptables rules are updated:
# iptables -L
A.1.2. Enabling Kernel Modules
Before running the cns-deploy tool, you must ensure that the dm_thin_pool, dm_multipath, and target_core_user modules are loaded in the OpenShift Container Platform node. Execute the following commands only on Gluster nodes to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep dm_multipath
# lsmod | grep target_core_user
If the modules are not loaded, then execute the following command to load the modules:
# modprobe dm_thin_pool
# modprobe dm_multipath
# modprobe target_core_user
To ensure these operations are persisted across reboots, create the following files and update each with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
A.1.3. Starting and Enabling Services
Execute the following commands to enable and run rpcbind on all the nodes hosting the gluster pod:
# systemctl add-wants multi-user rpcbind.service
# systemctl enable rpcbind.service
# systemctl start rpcbind.service
Execute the following command to check the status of rpcbind:
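The status command itself is not preserved in this extract; assuming the standard rpcbind systemd unit used above, it would be:
# systemctl status rpcbind.service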
Next Step: Proceed to Section A.3, “Setting up the Environment” to prepare the environment for Red Hat Gluster Storage Container Converged in OpenShift.
To remove an installation of Red Hat Openshift Container Storage that was deployed using cns-deploy, run the cns-deploy --abort command. Use the -g option if Gluster is containerized.
When the pods are deleted, not all Gluster states are removed from the node. Therefore, you must also run the rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs command on every node that was running a Gluster pod, and run wipefs -a <device> for every storage device that was consumed by Heketi. This erases all the remaining Gluster states from each node. You must be an administrator to run the device wiping command.
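A sketch of that removal sequence, assuming the same topology.json used for deployment, an example storage-project namespace, and /dev/sdb as an example device consumed by Heketi:
# cns-deploy -n storage-project -g --abort topology.json
# rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs
# wipefs -a /dev/sdb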
A.2. Setting up Independent Mode
In an independent mode set-up, a dedicated Red Hat Gluster Storage cluster is available external to the OpenShift Container Platform. The storage is provisioned from the Red Hat Gluster Storage cluster.
A.2.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Layered install involves installing Red Hat Gluster Storage over Red Hat Enterprise Linux.
It is recommended to create a separate /var partition that is large enough (50GB - 100GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux 7 Server
Independent mode is supported only on Red Hat Enterprise Linux 7.
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network username and password to register the system with the Red Hat Network:
# subscription-manager register
Identify Available Entitlement Pools
Run the following commands to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels
For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7
Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
Verify if the Channels are Enabled
Run the following command to verify if the channels are enabled:
# yum repolist
Update all packages
Ensure that all packages are up to date by running the following command.
# yum update
Kernel Version Requirement
Independent mode requires kernel version kernel-3.10.0-862.14.4.el7.x86_64 or higher on the system. Verify the installed and running kernel versions by running the following commands:
# rpm -q kernel
kernel-3.10.0-862.14.4.el7.x86_64
# uname -r
3.10.0-862.14.4.el7.x86_64
Important: If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
To enable gluster-block, execute the following command:
# yum install gluster-block
Reboot
Reboot the system.
A.2.2. Configuring Port Access
This section provides information about the ports that must be open for the independent mode.
Red Hat Gluster Storage Server uses the listed ports. You must ensure that the firewall settings do not prevent access to these ports.
Execute the following commands to open the required ports for both runtime and permanent configurations on all Red Hat Gluster Storage nodes:
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=49152-49664/tcp --permanent
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets, respectively.
- The port range 49152-49664 defines the ports that GlusterFS can use for communication with its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
A.2.3. Enabling Kernel Modules
Execute the following commands to enable kernel modules:
You must ensure that the dm_thin_pool and target_core_user modules are loaded in the Red Hat Gluster Storage nodes.
# modprobe target_core_user
# modprobe dm_thin_pool
Execute the following commands to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep target_core_user
Note: To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes.
# modprobe dm_multipath
Execute the following command to verify if the module is loaded:
# lsmod | grep dm_multipath
Note: To ensure these operations are persisted across reboots, create the following file and update it with the content as mentioned:
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
A.2.4. Starting and Enabling Services
Execute the following commands to enable and start sshd, glusterd, and gluster-blockd:
# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
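The original section does not show a verification step here; assuming the systemd unit names used above, the service state can be confirmed with:
# systemctl status glusterd gluster-blockd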
Next Step: Proceed to Section A.3, “Setting up the Environment” to prepare the environment for Red Hat Gluster Storage Container Converged in OpenShift.
A.3. Setting up the Environment
This section outlines the details for setting up the environment for Red Hat Openshift Container Platform.
A.3.1. Preparing the Red Hat OpenShift Container Platform Cluster
Execute the following steps to prepare the Red Hat OpenShift Container Platform cluster:
On the master or client, execute the following command to log in as the cluster admin user:
# oc login
For example:
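The original example output is not preserved in this extract; a typical interactive login against a hypothetical master host (master.example.com) looks roughly like:
# oc login
Authentication required for https://master.example.com:8443 (openshift)
Username: <cluster-admin-user>
Password:
Login successful.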
On the master or client, execute the following command to create a project, which will contain all the containerized Red Hat Gluster Storage services:
# oc new-project <project_name>
For example:
# oc new-project storage-project
Now using project "storage-project" on server "https://master.example.com:8443"
After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can only run in privileged mode.
# oc adm policy add-scc-to-user privileged -z default
Execute the following steps on the master to set up the router:
Note: If a router already exists, proceed to Step 5. To verify if the router is already deployed, execute the following command:
# oc get dc --all-namespaces
To list all routers in all namespaces, execute the following command:
# oc get dc --all-namespaces --selector=router=router
NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
glusterblock-storage-provisioner-dc   1          1         0         config
heketi-storage                        4          1         1         config
Execute the following command to enable the deployment of the router:
# oc adm policy add-scc-to-user privileged -z router
Execute the following command to deploy the router:
# oc adm router storage-project-router --replicas=1
Edit the subdomain name in the master-config.yaml file located at /etc/origin/master/master-config.yaml.
For example:
subdomain: "cloudapps.mystorage.com"
subdomain: "cloudapps.mystorage.com"Copy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#customizing-the-default-routing-subdomain.
For OpenShift Container Platform 3.7 and 3.9 execute the following command to restart the services:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
Note: If the router setup fails, use the port forward method as described in https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Port_Fwding.
For more information regarding router setup, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/configuring_clusters/setting-up-a-router
Execute the following command to verify if the router is running:
# oc get dc <router_name>
For example:
# oc get dc storage-project-router
NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
glusterblock-storage-provisioner-dc   1          1         0         config
heketi-storage                        4          1         1         config
Ensure you do not edit the /etc/dnsmasq.conf file until the router has started.
After the router is running, the client has to be set up to access the services in the OpenShift cluster. Execute the following steps on the client to set up the DNS.
Execute the following command to find the IP address of the router:
# oc get pods -o wide --all-namespaces | grep router
storage-project   storage-project-router-1-cm874   1/1   Running   119d   10.70.43.132   dhcp43-132.lab.eng.blr.redhat.com
Edit the /etc/dnsmasq.conf file and add the following line to the file:
address=/.cloudapps.mystorage.com/<Router_IP_Address>
where Router_IP_Address is the IP address of the node where the router is running.
Restart the dnsmasq service by executing the following command:
# systemctl restart dnsmasq
Edit /etc/resolv.conf and add the following line:
nameserver 127.0.0.1
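To confirm that wildcard names under the configured subdomain now resolve to the router IP (10.70.43.132 in the earlier example), a quick check using getent and the example subdomain above could be:
# getent hosts heketi-storage-project.cloudapps.mystorage.com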
For more information regarding setting up the DNS, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/installing_clusters/install-config-install-prerequisites#prereq-dns.
A.3.2. Deploying Containerized Red Hat Gluster Storage Solutions
The following sections cover the deployment of converged mode pods and independent mode pods using the cns-deploy tool.
- It is recommended to use separate clusters for OpenShift Container Platform infrastructure workloads (registry, logging, and metrics) and for application pod storage. Hence, if you have more than 6 nodes, ensure that you create multiple clusters with a minimum of 3 nodes each. The infrastructure cluster should belong to the default project namespace.
- If you want to enable encryption on the Red Hat Openshift Container Storage setup, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Enabling_Encryption before proceeding with the following steps.
You must first provide a topology file for heketi which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
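The full topology-sample.json is not reproduced in this extract; an abbreviated sketch of its structure, using placeholder hostnames, IP addresses, and device names, looks like this:
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node1.example.com"],
                            "storage": ["192.168.121.11"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                }
            ]
        }
    ]
}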
where,
- clusters: Array of clusters. Each element of the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element of the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing optimum position of bricks by having replicas of bricks in different zones. Hence zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses:
- manage: It is the hostname/IP Address that is used by Heketi to communicate with the node
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added
Copy the topology file from the default location to your location and then edit it:
# cp /usr/share/heketi/topology-sample.json /<Path>/topology.json
Edit the topology file: specify the Red Hat Gluster Storage pod hostnames under the node.hostnames.manage section and the IP addresses under the node.hostnames.storage section. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
Heketi stores its database on a Red Hat Gluster Storage volume. In cases where the volume is down, the Heketi service does not respond due to the unavailability of the volume served by a disabled trusted storage pool. To resolve this issue, restart the trusted storage pool which contains the Heketi volume.
A.3.3. Deploying Converged Mode
Execute the following commands to deploy converged mode:
Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -v -n <namespace> -g --admin-key <admin-key> --user-key <user-key> topology.json
Note:
- From Container-Native Storage 3.6, support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To deploy S3 compatible object store in Red Hat Openshift Container Storage, see substep i below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes. This default configuration will dynamically create block-hosting volumes of 500GB in size when more space is required. If you want to change this value, then use --block-host in cns-deploy. For example:
# cns-deploy -v -n storage-project -g --admin-key secret --user-key mysecret --block-host 1000 topology.json
Note: For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --yes --admin-key <admin-key> --user-key <user-key> --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped. object-sc and object-capacity are optional parameters, where object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
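The original sample run is not preserved in this extract; a representative invocation, assuming an example storage-project namespace and illustrative credentials, might look like:
# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --yes --admin-key secret --user-key mysecret --log-file=/var/log/cns-deploy.log --object-account dev --object-user dev --object-password redhat --verbose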
Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify if Heketi is loaded with the topology, execute the following command:
# heketi-cli topology info
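If an admin key was set during deployment, the heketi-cli client must also present those credentials; a minimal sketch, assuming the standard HEKETI_CLI_USER and HEKETI_CLI_KEY environment variables recognized by heketi-cli:
# export HEKETI_CLI_USER=admin
# export HEKETI_CLI_KEY=<admin-key>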
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Managing_Clusters.
Next step: If you are installing the independent mode 3.11, proceed to https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Updating_Registry.
A.3.3.1. Deploying Independent Mode
Execute the following commands to deploy Red Hat Openshift Container Storage in Independent mode:
To set up passwordless SSH to all Red Hat Gluster Storage nodes, execute the following command on the client for each of the Red Hat Gluster Storage nodes:
# ssh-copy-id -i /root/.ssh/id_rsa root@<hostname>
Execute the following command on the client to deploy the heketi pod and to create a cluster of Red Hat Gluster Storage nodes:
# cns-deploy -v -n <namespace> -g --admin-key <admin-key> --user-key <user-key> topology.json
Note:
- Support for S3 compatible Object Store is under technology preview. To deploy S3 compatible object store, see substep i below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes. This default configuration will dynamically create block-hosting volumes of 500GB in size when more space is required. If you want to change this value, then use --block-host in cns-deploy. For example:
# cns-deploy -v -n storage-project -g --admin-key secret --user-key mysecret --block-host 1000 topology.json
Note: For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --admin-key <admin-key> --user-key <user-key> --yes --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped. object-sc and object-capacity are optional parameters, where object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
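The original sample run is not preserved in this extract; a representative invocation, assuming an example storage-project namespace and illustrative credentials, might look like:
# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --admin-key secret --user-key mysecret --yes --log-file=/var/log/cns-deploy.log --object-account dev --object-user dev --object-password redhat --verbose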
Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick-multiplexing:
Execute the following command to enable brick multiplexing:
# gluster vol set all cluster.brick-multiplex on
For example:
# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
Restart the heketidbstorage volume:
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success
# gluster vol start heketidbstorage
volume start: heketidbstorage: success
Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify if Heketi is loaded with the topology, execute the following command:
# heketi-cli topology info
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Managing_Clusters.
Next step: If you are installing converged mode, proceed to https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Updating_Registry.