This documentation is for a release that is no longer maintained. See the documentation for the latest supported version 3 or the latest supported version 4.
8.2. Deploying Containerized Red Hat Gluster Storage Solutions
The following section covers the deployment of Container-Native Storage pods and Container-Ready Storage using the cns-deploy tool.
Note
- It is recommended to use separate clusters for the OpenShift Container Platform infrastructure workload (registry, logging, and metrics) and for application pod storage. Hence, if you have more than 6 nodes, ensure that you create multiple clusters with a minimum of 3 nodes each. The infrastructure cluster should belong to the default project namespace.
- If you want to enable encryption on the Container-Native Storage setup, refer to Chapter 17, Enabling Encryption before proceeding with the following steps.
- You must first provide a topology file for heketi which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
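The sample topology file has the following general shape. This is an abridged sketch: the hostname, IP address, and device names below are placeholders, and the shipped topology-sample.json describes more nodes and devices.

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node1.example.com"],
              "storage": ["192.168.121.101"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb", "/dev/sdc"]
        }
      ]
    }
  ]
}
```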
where:
- clusters: Array of clusters. Each element of the array is a map which describes the cluster as follows:
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element of the array is a map which describes the node as follows:
- node: A map of the following elements:
- zone: The value represents the zone number that the node belongs to; heketi uses the zone number to choose the optimum position of bricks by placing replicas of a brick in different zones. Hence, the zone number is similar to a failure domain.
- hostnames: A map which lists the manage and storage addresses:
- manage: The hostname/IP address that heketi uses to communicate with the node.
- storage: The IP address that other OpenShift nodes use to communicate with the node. Storage data traffic uses the interface attached to this IP. This must be an IP address and not a hostname because, in an OpenShift environment, heketi considers this to be the endpoint too.
- devices: Name of each disk to be added.
Note
Copy the topology file from the default location to your location and then edit it:

# cp /usr/share/heketi/topology-sample.json /<Path>/topology.json

Edit the topology file: set the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and the IP address under the node.hostnames.storage section. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
Important
Heketi stores its database on a Red Hat Gluster Storage volume. If this volume is unavailable (for example, because the trusted storage pool serving it is down), the heketi service does not respond. To resolve this issue, restart the trusted storage pool which contains the heketi volume.
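Before deploying, you can sanity-check that the edited topology file is still well-formed JSON, since a stray comma or bracket will make the deployment fail. This is an optional quick check; the stand-in file below is an example only.

```shell
# Create a minimal stand-in topology file (example content only), then
# check that it is well-formed JSON before handing it to cns-deploy.
printf '{"clusters": []}' > /tmp/topology.json
python3 -m json.tool /tmp/topology.json > /dev/null && echo "valid"
```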
To deploy Container-Native Storage, refer to Section 8.2.1, “Deploying Container-Native Storage”. To deploy Container-Ready Storage, refer to Section 8.2.2, “Deploying Container-Ready Storage”.
8.2.1. Deploying Container-Native Storage
Execute the following commands to deploy container-native storage:
- Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -n <namespace> -g --admin-key <Key> topology.json

Note
- As of Container-Native Storage 3.6, support for S3 compatible object store in Container-Native Storage is under technology preview. To deploy the S3 compatible object store, see Step 1a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator has access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes (for more information, see Section 9.2, “Block Storage”). This default configuration dynamically creates block-hosting volumes of 500 GB in size when more space is required. If you want to change this value, use --block-host in cns-deploy. For example:

# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
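As an illustrative back-of-the-envelope calculation (this is not cns-deploy output, and it ignores fragmentation and metadata overhead), the number of block-hosting volumes needed grows with the total requested gluster-block capacity:

```shell
# Illustrative arithmetic only: with the default BLOCK_HOST_SIZE of
# 500 GB, 600 GB of requested gluster-block capacity needs at least
# two block-hosting volumes, computed here with ceiling division.
block_host_size=500
requested=600   # total GB of gluster-block volumes to provision
hosting_volumes=$(( (requested + block_host_size - 1) / block_host_size ))
echo "$hosting_volumes"
```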
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:

# cns-deploy --help

- To deploy the S3 compatible object store along with the heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --yes --admin-key <key> --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose

object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment is skipped. object-sc and object-capacity are optional parameters, where object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>

For example:

# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com

To verify that heketi is loaded with the topology, execute the following command:

# heketi-cli topology info
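The heketi endpoint is simply the OpenShift route exposed for the heketi service, composed from the project name and the router subdomain. A minimal sketch of how the URL is assembled (the values below are examples; substitute your own):

```shell
# Example values; substitute your own project name and router subdomain.
project_name="storage-project"
sub_domain_name="cloudapps.mystorage.com"

# The heketi route exposed by OpenShift follows this pattern:
export HEKETI_CLI_SERVER="http://heketi-${project_name}.${sub_domain_name}"
echo "$HEKETI_CLI_SERVER"
```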
Note
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, refer to Chapter 12, Managing Clusters.
8.2.2. Deploying Container-Ready Storage
Execute the following commands to deploy container-ready storage:
- To set up passwordless SSH to all Red Hat Gluster Storage nodes, execute the following command on the client for each of the Red Hat Gluster Storage nodes:
# ssh-copy-id -i /root/.ssh/id_rsa root@<ip/hostname_rhgs node>
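Since this command must be run once per storage node, a small loop can save typing. A minimal sketch that prints the command for each node as a dry run (the node names are hypothetical placeholders):

```shell
# Hypothetical node list; replace with your Red Hat Gluster Storage
# node IP addresses or hostnames.
nodes="rhgs-node1 rhgs-node2 rhgs-node3"

# Print (dry run) the ssh-copy-id invocation for each node; remove the
# echo to actually copy the key.
for node in $nodes; do
  echo ssh-copy-id -i /root/.ssh/id_rsa "root@${node}"
done
```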
- Execute the following command on the client to deploy the heketi pod and to create a cluster of Red Hat Gluster Storage nodes:
# cns-deploy -n <namespace> --admin-key <Key> -s /root/.ssh/id_rsa topology.json

Note
- Support for S3 compatible object store is under technology preview. To deploy the S3 compatible object store, see Step 2a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator has access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes (for more information, see Section 9.2, “Block Storage”). This default configuration dynamically creates block-hosting volumes of 500 GB in size when more space is required. If you want to change this value, use --block-host in cns-deploy. For example:

# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:

# cns-deploy --help

- To deploy the S3 compatible object store along with the heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --admin-key <Key> --yes --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose

object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment is skipped. object-sc and object-capacity are optional parameters, where object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows running more bricks than before with the same memory footprint. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick multiplexing:
- Execute the following command to enable brick multiplexing:
# gluster vol set all cluster.brick-multiplex on

For example:

# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.
Do you still want to continue? (y/n) y
volume set: success

- Restart the heketidbstorage volume:
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success

# gluster vol start heketidbstorage
volume start: heketidbstorage: success
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>

For example:

# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com

To verify that heketi is loaded with the topology, execute the following command:

# heketi-cli topology info
Note
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, refer to Chapter 12, Managing Clusters.