Chapter 12. Managing Clusters
Heketi allows administrators to add and remove storage capacity by managing either a single or multiple Red Hat Gluster Storage clusters.
12.1. Increasing Storage Capacity
You can increase the storage capacity in any of the following ways:
- Adding devices
- Increasing cluster size
- Adding an entirely new cluster
12.1.1. Adding New Devices
You can add more devices to existing nodes to increase storage capacity. When adding more devices, ensure that you add them as a set. For example, when expanding a distributed replicated volume with a replica count of 2, add at least one device to at least two nodes. If using replica 3, add at least one device to at least three nodes.
You can add a device by using the CLI, the API, or by updating the topology JSON file. The sections ahead describe using the heketi CLI and updating the topology JSON file. For information on adding new devices using the API, see the Heketi API: https://github.com/heketi/heketi/wiki/API#device_add
12.1.1.1. Using Heketi CLI
Register the specified device. The following example command shows how to add the device /dev/sde to node d6f2c22f2757bf67b1486d868dcb7794:
# heketi-cli device add --name=/dev/sde --node=d6f2c22f2757bf67b1486d868dcb7794
OUTPUT:
Device added successfully
12.1.1.2. Updating Topology File
You can add the new device to the node description in your topology JSON file that was used to set up the cluster, and then rerun the command to load the topology.
The following is an example where a new /dev/sde drive is added to a node.
In the file:
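(A minimal sketch of the relevant node entry; the hostnames, IP addresses, and existing device names are placeholders for your environment. The new /dev/sde entry is appended to the node's devices list.)
{
    "node": {
        "hostnames": {
            "manage": ["node4.example.com"],
            "storage": ["192.168.10.104"]
        },
        "zone": 1
    },
    "devices": [
        "/dev/sdb",
        "/dev/sdc",
        "/dev/sdd",
        "/dev/sde"
    ]
}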
Load the topology file:
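# heketi-cli topology load --json=<topology file path>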
12.1.2. Increasing Cluster Size
Another way to add storage to Heketi is to add new nodes to the cluster. Like adding devices, you can add a new node to an existing cluster by using the CLI, the API, or by updating the topology JSON file. When you add a new node to the cluster, you must also register new devices to that node.
The sections ahead describe using the heketi CLI and updating the topology JSON file. For information on adding new nodes using the API, see the Heketi API: https://github.com/heketi/heketi/wiki/API#node_add
Note
Red Hat Gluster Storage pods have to be configured before proceeding with the following steps. To manually deploy the Red Hat Gluster Storage pods, refer to Section A.2, “Deploying the Containers”.
12.1.2.1. Using Heketi CLI
The following shows an example of how to add a new node in zone 1 to cluster 597fceb5d6c876b899e48f599b988f54 using the CLI:
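(A sketch of the command; the management and storage hostnames shown are placeholders for your environment.)
# heketi-cli node add --zone=1 --cluster=597fceb5d6c876b899e48f599b988f54 --management-host-name=node4.example.com --storage-host-name=192.168.10.104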
The following example command shows how to register the /dev/sdb and /dev/sdc devices for node 095d5f26b56dc6c64564a9bc17338cbf:
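# heketi-cli device add --name=/dev/sdb --node=095d5f26b56dc6c64564a9bc17338cbf
# heketi-cli device add --name=/dev/sdc --node=095d5f26b56dc6c64564a9bc17338cbf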
12.1.2.2. Updating Topology File
You can expand a cluster by adding a new node to your topology JSON file. When adding the new node, you must add its information after the existing nodes so that the Heketi CLI can identify which cluster the new node should be part of.
The following shows an example of how to add a new node and its devices:
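(A minimal sketch of the new node entry, appended after the existing entries in the nodes array; the hostnames, IP addresses, and device names are placeholders for your environment.)
{
    "node": {
        "hostnames": {
            "manage": ["node5.example.com"],
            "storage": ["192.168.10.105"]
        },
        "zone": 1
    },
    "devices": [
        "/dev/sdb",
        "/dev/sdc"
    ]
}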
Load the topology file:
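# heketi-cli topology load --json=<topology file path>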
12.1.3. Adding a New Cluster
Storage capacity can also be increased by adding new clusters of Red Hat Gluster Storage. New clusters can be added in the following two ways based on the requirement:
- Adding a new cluster to the existing Container-Native Storage
- Adding another Container-Native Storage cluster in a new project
12.1.3.1. Adding a New Cluster to the Existing Container-Native Storage
To add a new cluster to the existing Container-Native Storage, execute the following commands:
- Verify that Container-Native Storage is deployed and working as expected in the existing project by executing the following command:
# oc get ds
For example:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR            AGE
glusterfs   3         3         3         storagenode=glusterfs    8m
- Add the label to each node on which the Red Hat Gluster Storage pods for the new cluster are to start, by executing the following command:
# oc label node <NODE_NAME> storagenode=<node_label>
where,
- NODE_NAME: the name of the newly created node
- node_label: the name that is used in the existing daemonSet
For example:
# oc label node 192.168.90.3 storagenode=glusterfs
node "192.168.90.3" labeled
- Verify that the Red Hat Gluster Storage pods are running by executing the following command:
# oc get ds
For example:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR            AGE
glusterfs   6         6         6         storagenode=glusterfs    8m
- Create a new topology file for the new cluster. You must provide a topology file for the new cluster, which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
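The following is a minimal sketch of such a topology file for a new three-node cluster; the hostnames, IP addresses, and device names are placeholders and must be replaced with the values for your environment:
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node4.example.com"],
                            "storage": ["192.168.10.104"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node5.example.com"],
                            "storage": ["192.168.10.105"]
                        },
                        "zone": 2
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node6.example.com"],
                            "storage": ["192.168.10.106"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                }
            ]
        }
    ]
}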
where,
- clusters: Array of clusters. Each element in the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element in the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by Heketi for choosing the optimum position of bricks by having replicas of bricks in different zones. Hence, the zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses.
- manage: It is the hostname/IP address that is used by Heketi to communicate with the node.
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added.
Edit the topology file: set the node.hostnames.manage section to the Red Hat Gluster Storage pod hostname, and set the node.hostnames.storage section to the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
- For the existing cluster, heketi-cli will be available to load the new topology. Run the following command to add the new topology to Heketi:
# heketi-cli topology load --json=<topology file path>
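For example, assuming the new cluster's topology is saved as /usr/share/heketi/topology.json:
# heketi-cli topology load --json=/usr/share/heketi/topology.json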
12.1.3.2. Adding Another Container-Native Storage Cluster in a New Project
To add another Container-Native Storage cluster in a new project, execute the following commands:
Note
Because the node label is global, there can be conflicts when starting Red Hat Gluster Storage DaemonSets with the same label in two different projects. The node label is an argument to cns-deploy, which enables deploying multiple trusted storage pools by using a different label in each project.
- Create a new project by executing the following command:
# oc new-project <new_project_name>
For example:
# oc new-project storage-project-2
Now using project "storage-project-2" on server "https://master.example.com:8443"
- After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can only run in privileged mode:
# oadm policy add-scc-to-user privileged -z storage-project-2
# oadm policy add-scc-to-user privileged -z default
- Create a new topology file for the new cluster. You must provide a topology file for the new cluster, which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
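The following is a minimal sketch of such a topology file for a new three-node cluster; the hostnames, IP addresses, and device names are placeholders and must be replaced with the values for your environment:
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node7.example.com"],
                            "storage": ["192.168.20.107"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node8.example.com"],
                            "storage": ["192.168.20.108"]
                        },
                        "zone": 2
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["node9.example.com"],
                            "storage": ["192.168.20.109"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                }
            ]
        }
    ]
}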
where,
- clusters: Array of clusters. Each element in the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element in the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by Heketi for choosing the optimum position of bricks by having replicas of bricks in different zones. Hence, the zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses.
- manage: It is the hostname/IP address that is used by Heketi to communicate with the node.
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added.
Edit the topology file: set the node.hostnames.manage section to the Red Hat Gluster Storage pod hostname, and set the node.hostnames.storage section to the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
- Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -n <namespace> --daemonset-label <NODE_LABEL> -g topology.json
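For example, assuming the new project is storage-project-2 and the new daemonSet label is glusterfs2, as used later in this procedure:
# cns-deploy -n storage-project-2 --daemonset-label glusterfs2 -g topology.json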
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
- Verify that Container-Native Storage is deployed and working as expected in the new project with the new daemonSet label by executing the following command:
# oc get ds
For example:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR             AGE
glusterfs   3         3         3         storagenode=glusterfs2    8m