
Chapter 12. Managing Clusters

Heketi allows administrators to add and remove storage capacity by managing one or more Red Hat Gluster Storage clusters.

12.1. Increasing Storage Capacity

You can increase storage capacity in any of the following ways:
  • Adding devices
  • Increasing cluster size
  • Adding an entirely new cluster

12.1.1. Adding New Devices

You can add more devices to existing nodes to increase storage capacity. When adding more devices, you must add them as a set. For example, when expanding a distributed replicated volume with a replica count of 2, add at least one device to at least two nodes. If using replica 3, add at least one device to at least three nodes.
You can add a device by using the CLI, by using the API, or by updating the topology JSON file. The following sections describe how to use the heketi CLI and how to update the topology JSON file. For information on adding new devices using the API, see the Heketi API documentation: https://github.com/heketi/heketi/wiki/API#device_add
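The following is a minimal sketch of adding one device to each of three nodes for the replica 3 case; the node IDs are placeholders and must be replaced with the IDs from your own cluster (for example, as reported by heketi-cli topology info):
# heketi-cli device add --name=/dev/sde --node=<node-id-1>
# heketi-cli device add --name=/dev/sde --node=<node-id-2>
# heketi-cli device add --name=/dev/sde --node=<node-id-3>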

12.1.1.1. Using Heketi CLI

Register the new device with the node it is attached to. The following example command shows how to add the device /dev/sde to the node d6f2c22f2757bf67b1486d868dcb7794:
# heketi-cli device add --name=/dev/sde --node=d6f2c22f2757bf67b1486d868dcb7794
OUTPUT:
Device added successfully
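Optionally, to confirm that the device is now registered to the node, you can query the node; the exact output format may vary by Heketi version:
# heketi-cli node info d6f2c22f2757bf67b1486d868dcb7794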

12.1.1.2. Updating Topology File

You can add the new device to the node description in the topology JSON file that was used to set up the cluster, and then rerun the command to load the topology.
The following is an example where a new /dev/sde drive is added to the node:
In the file:
    {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "node4.example.com"
                            ],
                            "storage": [
                                "192.168.10.100"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb",
                        "/dev/sdc",
                        "/dev/sdd",
                        "/dev/sde"
                     ]
                }
Load the topology file:
# heketi-cli topology load --json=topology-sample.json
    Found node 192.168.10.100 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
        Adding device /dev/sde ... OK
    Found node 192.168.10.101 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
    Found node 192.168.10.102 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
    Found node 192.168.10.103 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
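Optionally, you can review the updated topology to confirm that the new device now appears under the node, for example:
# heketi-cli topology info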

12.1.2. Increasing Cluster Size

Another way to add storage to Heketi is to add new nodes to the cluster. As with adding devices, you can add a new node to an existing cluster by using the CLI, by using the API, or by updating the topology JSON file. After you add a new node to the cluster, you must register new devices to that node.
The following sections describe how to use the heketi CLI and how to update the topology JSON file. For information on adding new nodes using the API, see the Heketi API documentation: https://github.com/heketi/heketi/wiki/API#node_add

Note

Red Hat Gluster Storage pods must be configured before proceeding with the following steps. To manually deploy the Red Hat Gluster Storage pods, refer to Section A.2, “Deploying the Containers”.

12.1.2.1. Using Heketi CLI

The following example shows how to add a new node in zone 1 to the cluster 597fceb5d6c876b899e48f599b988f54 using the CLI:
# heketi-cli node add --zone=1 --cluster=597fceb5d6c876b899e48f599b988f54 --management-host-name=node4.example.com --storage-host-name=192.168.10.104

OUTPUT:
Node information:
Id: 095d5f26b56dc6c64564a9bc17338cbf
State: online
Cluster Id: 597fceb5d6c876b899e48f599b988f54
Zone: 1
Management Hostname node4.example.com
Storage Hostname 192.168.10.104
The following example commands show how to register the devices /dev/sdb and /dev/sdc for the node 095d5f26b56dc6c64564a9bc17338cbf:
# heketi-cli device add --name=/dev/sdb --node=095d5f26b56dc6c64564a9bc17338cbf
OUTPUT:
Device added successfully

# heketi-cli device add --name=/dev/sdc --node=095d5f26b56dc6c64564a9bc17338cbf
OUTPUT:
Device added successfully
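Optionally, you can confirm that the new node and its devices are registered by querying the cluster; a sketch using the cluster ID from the example above:
# heketi-cli cluster info 597fceb5d6c876b899e48f599b988f54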

12.1.2.2. Updating Topology File

You can expand a cluster by adding a new node to your topology JSON file. When adding the new node, add its information after the existing nodes so that the Heketi CLI can identify which cluster the new node should be part of.
The following example shows how to add a new node and its devices:
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "node4.example.com"
                            ],
                            "storage": [
                                "192.168.10.104"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb",
                        "/dev/sdc"
                     ]
                }
Load the topology file:
# heketi-cli topology load --json=topology-sample.json
    Found node 192.168.10.100 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
        Found device /dev/sde
    Found node 192.168.10.101 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
    Found node 192.168.10.102 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
    Found node 192.168.10.103 on cluster d6f2c22f2757bf67b1486d868dcb7794
        Found device /dev/sdb
        Found device /dev/sdc
        Found device /dev/sdd
    Creating node node4.example.com ... ID: ff3375aca6d98ed8a004787ab823e293
        Adding device /dev/sdb ... OK
        Adding device /dev/sdc ... OK
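Optionally, you can confirm the devices on the newly created node by querying it with the node ID returned above, for example:
# heketi-cli node info ff3375aca6d98ed8a004787ab823e293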

12.1.3. Adding a New Cluster

Storage capacity can also be increased by adding new clusters of Red Hat Gluster Storage. New clusters can be added in the following two ways, depending on the requirement:
  • Adding a new cluster to the existing Container-Native Storage
  • Adding another Container-Native Storage cluster in a new project

12.1.3.1. Adding a New Cluster to the Existing Container-Native Storage

To add a new cluster to the existing Container-Native Storage, execute the following commands:
  1. Verify that Container-Native Storage is deployed and working as expected in the existing project by executing the following command:
    # oc get ds
    For example:
    # oc get ds
    NAME        DESIRED   CURRENT   READY     NODE-SELECTOR            AGE
    glusterfs   3         3         3         storagenode=glusterfs    8m
  2. Add the label to each node where the Red Hat Gluster Storage pods are to be started for the new cluster, by executing the following command:
    # oc label node <NODE_NAME> storagenode=<node_label>
    where,
    • NODE_NAME: the name of the newly added node
    • node_label: the label that is used in the existing DaemonSet
    For example:
    # oc label node 192.168.90.3 storagenode=glusterfs
    node "192.168.90.3" labeled
  3. Verify that the Red Hat Gluster Storage pods are running by executing the following command:
    # oc get ds
    For example:
    # oc get ds
    NAME        DESIRED   CURRENT   READY     NODE-SELECTOR            AGE
    glusterfs   6         6         6         storagenode=glusterfs    8m
  4. Create a new topology file for the new cluster, describing the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
    For example:
    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "node1.example.com"
                                ],
                                "storage": [
                                    "192.168.68.3"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/sdb",
                            "/dev/sdc",
                            "/dev/sdd",
                            "/dev/sde",
                            "/dev/sdf",
                            "/dev/sdg",
                            "/dev/sdh",
                            "/dev/sdi"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "node2.example.com"
                                ],
                                "storage": [
                                    "192.168.68.2"
                                ]
                            },
                            "zone": 2
                        },
                        "devices": [
                            "/dev/sdb",
                            "/dev/sdc",
                            "/dev/sdd",
                            "/dev/sde",
                            "/dev/sdf",
                            "/dev/sdg",
                            "/dev/sdh",
                            "/dev/sdi"
                        ]
                    },
    
    .......
    .......
    where,
    • clusters: Array of clusters.
      Each element in the array is a map which describes the cluster as follows.
      • nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container.
        Each element in the array is a map which describes the node as follows.
        • node: A map of the following elements:
          • zone: The zone number that the node belongs to. Heketi uses the zone number to choose the optimum position of bricks by placing replicas of a brick in different zones. Hence, the zone number is similar to a failure domain.
          • hostnames: A map which lists the manage and storage addresses.
            • manage: The hostname/IP address that Heketi uses to communicate with the node.
            • storage: The IP address that other OpenShift nodes use to communicate with the node. Storage data traffic uses the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
        • devices: Name of each disk to be added.
    Edit the topology file to set the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and the IP address under the node.hostnames.storage section. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
  5. For the existing cluster, heketi-cli is already available to load the new topology. Run the following command to add the new topology to Heketi:
    # heketi-cli topology load --json=<topology file path>
    For example:
    # heketi-cli topology load --json=topology.json
    Creating cluster ... ID: 94877b3f72b79273e87c1e94201ecd58
        Creating node node4.example.com ... ID: 95cefa174c7210bd53072073c9c041a3
            Adding device /dev/sdb ... OK
            Adding device /dev/sdc ... OK
            Adding device /dev/sdd ... OK
            Adding device /dev/sde ... OK
        Creating node node5.example.com ... ID: f9920995e580f0fe56fa269d3f3f8428
            Adding device /dev/sdb ... OK
            Adding device /dev/sdc ... OK
            Adding device /dev/sdd ... OK
            Adding device /dev/sde ... OK
        Creating node node6.example.com ... ID: 73fe4aa89ba35c51de4a51ecbf52544d
            Adding device /dev/sdb ... OK
            Adding device /dev/sdc ... OK
            Adding device /dev/sdd ... OK
            Adding device /dev/sde ... OK
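After the topology is loaded, Heketi manages both the existing cluster and the new one. As an optional check, you can list the clusters known to Heketi and verify that a second cluster ID now appears:
# heketi-cli cluster list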

12.1.3.2. Adding Another Container-Native Storage Cluster in a New Project

To add another Container-Native Storage cluster in a new project, execute the following commands:

Note

Because the node label is global, there can be conflicts when starting Red Hat Gluster Storage DaemonSets with the same label in two different projects. The node label is an argument to cns-deploy, which enables deploying multiple trusted storage pools by using a different label in each project.
  1. Create a new project by executing the following command:
    # oc new-project <new_project_name>
    For example:
    # oc new-project storage-project-2
    
    Now using project "storage-project-2" on server "https://master.example.com:8443"
  2. After the project is created, execute the following commands on the master node to enable the deployment of privileged containers, because the Red Hat Gluster Storage container can run only in privileged mode.
    # oadm policy add-scc-to-user privileged -z storage-project-2
    # oadm policy add-scc-to-user privileged -z default
  3. Create a new topology file for the new cluster, describing the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
    For example:
    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "node1.example.com"
                                ],
                                "storage": [
                                    "192.168.68.3"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/sdb",
                            "/dev/sdc",
                            "/dev/sdd",
                            "/dev/sde",
                            "/dev/sdf",
                            "/dev/sdg",
                            "/dev/sdh",
                            "/dev/sdi"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "node2.example.com"
                                ],
                                "storage": [
                                    "192.168.68.2"
                                ]
                            },
                            "zone": 2
                        },
                        "devices": [
                            "/dev/sdb",
                            "/dev/sdc",
                            "/dev/sdd",
                            "/dev/sde",
                            "/dev/sdf",
                            "/dev/sdg",
                            "/dev/sdh",
                            "/dev/sdi"
                        ]
                    },
    
    .......
    .......
    where,
    • clusters: Array of clusters.
      Each element in the array is a map which describes the cluster as follows.
      • nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container.
        Each element in the array is a map which describes the node as follows.
        • node: A map of the following elements:
          • zone: The zone number that the node belongs to. Heketi uses the zone number to choose the optimum position of bricks by placing replicas of a brick in different zones. Hence, the zone number is similar to a failure domain.
          • hostnames: A map which lists the manage and storage addresses.
            • manage: The hostname/IP address that Heketi uses to communicate with the node.
            • storage: The IP address that other OpenShift nodes use to communicate with the node. Storage data traffic uses the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
        • devices: Name of each disk to be added.
    Edit the topology file to set the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and the IP address under the node.hostnames.storage section. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
  4. Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
    # cns-deploy -n <namespace> --daemonset-label <NODE_LABEL> -g topology.json
    For example:
    # cns-deploy -n storage-project-2 --daemonset-label glusterfs2 -g topology.json 
    Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
    
    Before getting started, this script has some requirements of the execution
    environment and of the container platform that you should verify.
    
    The client machine that will run this script must have:
     * Administrative access to an existing Kubernetes or OpenShift cluster
     * Access to a python interpreter 'python'
     * Access to the heketi client 'heketi-cli'
    
    Each of the nodes that will host GlusterFS must also have appropriate firewall
    rules for the required GlusterFS ports:
     * 2222  - sshd (if running GlusterFS in a pod)
     * 24007 - GlusterFS Daemon
     * 24008 - GlusterFS Management
     * 49152 to 49251 - Each brick for every volume on the host requires its own
       port. For every new brick, one new port will be used starting at 49152. We
       recommend a default range of 49152-49251 on each host, though you can adjust
       this to fit your needs.
    
    In addition, for an OpenShift deployment you must:
     * Have 'cluster_admin' role on the administrative account doing the deployment
     * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
     * Have a router deployed that is configured to allow apps to access services
       running in the cluster
    
    Do you wish to proceed with deployment?
    
    [Y]es, [N]o? [Default: Y]: Y
    Using OpenShift CLI.
    NAME                STATUS    AGE
    storage-project-2   Active    2m
    Using namespace "storage-project-2".
    Checking that heketi pod is not running ... OK
    template "deploy-heketi" created
    serviceaccount "heketi-service-account" created
    template "heketi" created
    template "glusterfs" created
    role "edit" added: "system:serviceaccount:storage-project-2:heketi-service-account"
    node "192.168.35.5" labeled
    node "192.168.35.6" labeled
    node "192.168.35.7" labeled
    daemonset "glusterfs" created
    Waiting for GlusterFS pods to start ... OK
    service "deploy-heketi" created
    route "deploy-heketi" created
    deploymentconfig "deploy-heketi" created
    Waiting for deploy-heketi pod to start ... OK
    Creating cluster ... ID: fde139c21b0afcb6206bf272e0df1590
    Creating node 192.168.35.5 ... ID: 0768a1ee35dce4cf707c7a1e9caa3d2a
    Adding device /dev/vdc ... OK
    Adding device /dev/vdd ... OK
    Adding device /dev/vde ... OK
    Adding device /dev/vdf ... OK
    Creating node 192.168.35.6 ... ID: 63966f6ffd48c1980c4a2d03abeedd04
    Adding device /dev/vdc ... OK
    Adding device /dev/vdd ... OK
    Adding device /dev/vde ... OK
    Adding device /dev/vdf ... OK
    Creating node 192.168.35.7 ... ID: de129c099193aaff2c64dca825f33558
    Adding device /dev/vdc ... OK
    Adding device /dev/vdd ... OK
    Adding device /dev/vde ... OK
    Adding device /dev/vdf ... OK
    heketi topology loaded.
    Saving heketi-storage.json
    secret "heketi-storage-secret" created
    endpoints "heketi-storage-endpoints" created
    service "heketi-storage-endpoints" created
    job "heketi-storage-copy-job" created
    deploymentconfig "deploy-heketi" deleted
    route "deploy-heketi" deleted
    service "deploy-heketi" deleted
    job "heketi-storage-copy-job" deleted
    pod "deploy-heketi-1-d0qrs" deleted
    secret "heketi-storage-secret" deleted
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Waiting for heketi pod to start ... OK
    heketi is now running.
    Ready to create and provide GlusterFS volumes.

    Note

    For more information on the cns-deploy commands, refer to the cns-deploy man page.
    # cns-deploy --help
  5. Verify that Container-Native Storage is deployed and working as expected in the new project with the new DaemonSet label by executing the following command:
    # oc get ds
    For example:
    # oc get ds
    NAME        DESIRED   CURRENT   READY     NODE-SELECTOR            AGE
    glusterfs   3         3         3         storagenode=glusterfs2   8m
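To manage the new cluster with heketi-cli from outside the pods, point the client at the heketi route created in the new project. The following is a sketch; the route hostname is a placeholder and can be looked up with oc get route heketi -n storage-project-2:
# export HEKETI_CLI_SERVER=http://<heketi-route-hostname>
# heketi-cli topology info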