5.2. Managing Volumes using Heketi

Heketi provides a RESTful management interface that can be used to manage the lifecycle of Red Hat Gluster Storage volumes. With Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can dynamically provision Red Hat Gluster Storage volumes with any of the supported durability types. Heketi automatically determines the location of bricks across the cluster, making sure to place bricks and their replicas across different failure domains. Heketi also supports any number of Red Hat Gluster Storage clusters, allowing cloud services to provide network file storage without being limited to a single Red Hat Gluster Storage cluster.
With Heketi, the administrator no longer manages or configures bricks, disks, or trusted storage pools. The Heketi service manages all hardware for the administrator, enabling it to allocate storage on demand. Any disks registered with Heketi must be provided in raw format; Heketi then manages them using LVM.


Note

The replica 3 volume type is the default and the only supported volume type that can be created using Heketi.
Heketi volume creation

Figure 5.1. Heketi volume creation

A volume creation request to Heketi leads it to select bricks spread across 2 zones and 4 nodes. After the volume is created in Red Hat Gluster Storage, Heketi provides the volume information to the service that initially made the request.
Heketi can be configured and executed using the CLI or the API. The sections ahead describe configuring Heketi using the CLI.

5.2.1. Prerequisites

Ensure that the following requirements are met:
Configure SSH access
Configure key-based SSH authentication without a password for the Heketi user. For a non-root user:
  • Ensure the user and server specified when copying SSH keys matches the user provided to Heketi in the Heketi configuration file.
  • Ensure the user can run commands with sudo by disabling requiretty in the /etc/sudoers file, and set "sudo": true in the sshexec section of the Heketi configuration file.
Configure the firewall
Ensure that Heketi can accept TCP requests over the port specified in the heketi.json file. For example, on Red Hat Enterprise Linux 7 based installations, run the following commands:
# firewall-cmd --zone=zone_name --add-port=port/tcp
# firewall-cmd --zone=zone_name --add-port=port/tcp --permanent
On Red Hat Enterprise Linux 6 based installations, run the following commands:
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport port -j ACCEPT
# service iptables save
Start glusterd
After Red Hat Gluster Storage is installed, ensure that the glusterd service is started.
Ensure disks are raw format
Disks to be registered with Heketi must be in the raw format.

5.2.2. Installing Heketi


Note

Heketi is supported only on Red Hat Enterprise Linux 7.
After installing Red Hat Gluster Storage 3.4, execute the following command to install the heketi-client:
 # yum install heketi-client
The heketi-client package provides the binary for the heketi-cli command line tool.
Execute the following command to install heketi:
# yum install heketi
For more information about subscribing to the required channels and installing Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide.

5.2.3. Starting the Heketi Server

Before starting the server, ensure that the following prerequisites are met:
  • Generate the passphrase-less SSH keys for the nodes which are going to be part of the trusted storage pool by running the following command:
    # ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
  • Change the owner and the group permissions for the heketi keys using the following command:
    # chown heketi:heketi /etc/heketi/heketi_key*
  • Set up key-based SSH authentication access between Heketi and the Red Hat Gluster Storage servers by running the following command:
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@server
  • As a non root user, set up password-less SSH access between Heketi and the Red Hat Gluster Storage servers by running the following command:
    $ ssh-copy-id -i /etc/heketi/heketi_key.pub user@server
  • Note

    To run SSH as a non-root user, the user name given in user@server for ssh-copy-id must match the user name provided to Heketi in the Heketi configuration file below.
  • Set up the heketi.json configuration file. The file is located at /etc/heketi/heketi.json. The configuration file has the information required to run the Heketi server. The config file must be in JSON format with the following settings:
    • port: string, Heketi REST service port number
    • use_auth: bool, Enable JWT Authentication
    • jwt: map, JWT Authentication settings
      • admin: map, Settings for the Heketi administrator
        • key: string, Admin access key
      • user: map, Settings for the Heketi volume requests access user
        • key: string, User access key
    • glusterfs: map, Red Hat Gluster Storage settings
      • executor: string, Determines the type of command executor to use. Possible values are:
        • mock: Does not send any commands out to servers. Can be used for development and tests
        • ssh: Sends commands to real systems over ssh
      • db: string, Location of Heketi database
      • sshexec: map, SSH configuration
        • keyfile: string, File with private ssh key
        • user: string, SSH user
    Following is an example of the JSON file:
    {
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",

      "_use_auth": "Enable JWT authorization. Please enable for deployment",
      "use_auth": false,

      "_jwt": "Private keys for access",
      "jwt": {
        "_admin": "Admin has access to all APIs",
        "admin": {
          "key": "My Secret"
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "My Secret"
        }
      },

      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": [
          "Execute plugin. Possible choices: mock, ssh",
          "mock: This setting is used for testing and development.",
          "      It will not send commands to any node.",
          "ssh:  This setting will notify Heketi to ssh to the nodes.",
          "      It will need the values in sshexec to be configured.",
          "kubernetes: Communicate with GlusterFS containers over",
          "            Kubernetes exec api."
        ],
        "executor": "ssh",

        "_sshexec_comment": "SSH username and private key file information",
        "sshexec": {
          "keyfile": "path/to/private_key",
          "user": "sshuser",
          "port": "Optional: ssh port.  Default is 22",
          "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab",
          "sudo": "Optional: set to true if SSH as a non root user. Default is false."
        },

        "_kubeexec_comment": "Kubernetes configuration",
        "kubeexec": {
          "host": "",
          "cert": "/path/to/crt.file",
          "insecure": false,
          "user": "kubernetes username",
          "password": "password for kubernetes user",
          "namespace": "OpenShift project or Kubernetes namespace",
          "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
        },

        "_db_comment": "Database file name",
        "db": "/var/lib/heketi/heketi.db",

        "_loglevel_comment": [
          "Set log level. Choices are:",
          "  none, critical, error, warning, info, debug",
          "Default is warning"
        ],
        "loglevel": "debug"
      }
    }


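Because a malformed heketi.json prevents the server from starting, it can help to sanity-check the file before starting the service. The following is a minimal sketch, assuming the key names shown in the sample configuration above; it is not part of Heketi itself:

```python
import json
import os
import tempfile

def check_heketi_config(path):
    """Report obvious problems in a heketi.json file.

    Key names follow the sample configuration shown above; the default
    location for the file is /etc/heketi/heketi.json.
    """
    with open(path) as f:
        cfg = json.load(f)  # raises ValueError if the JSON is malformed
    problems = []
    if "port" not in cfg:
        problems.append("missing top-level 'port'")
    gfs = cfg.get("glusterfs", {})
    if gfs.get("executor") not in ("mock", "ssh", "kubernetes"):
        problems.append("glusterfs.executor must be mock, ssh, or kubernetes")
    if gfs.get("executor") == "ssh" and "sshexec" not in gfs:
        problems.append("executor 'ssh' requires a glusterfs.sshexec section")
    return problems

# Example: a minimal valid configuration reports no problems.
cfg = {"port": "8080", "use_auth": False,
       "glusterfs": {"executor": "ssh",
                     "sshexec": {"keyfile": "/etc/heketi/heketi_key",
                                 "user": "root"},
                     "db": "/var/lib/heketi/heketi.db"}}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(cfg, f)
print(check_heketi_config(f.name))  # → []
os.unlink(f.name)
```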
    The location of the private SSH key that is created must be set in the keyfile setting of the configuration file, and the key must be readable by the heketi user.

Starting the Server

For Red Hat Enterprise Linux 7

  1. Enable heketi by executing the following command:
    # systemctl enable heketi
  2. Start the Heketi server, by executing the following command:
    # systemctl start heketi
  3. To check the status of the Heketi server, execute the following command:
    # systemctl status heketi
  4. To check the logs, execute the following command:
    # journalctl -u heketi


Important

After Heketi is configured to manage the trusted storage pool, do not run gluster commands directly on it, as this makes the heketi database inconsistent, leading to unexpected behavior with Heketi.

Verifying the Configuration

To verify that the server is running, perform the following step:
If Heketi is not set up with authentication, use curl to verify the configuration:
# curl http://<server:port>/hello
You can also verify the configuration using the heketi-cli when authentication is enabled:
# heketi-cli --server http://<server:port> --user <user> --secret <secret> cluster list
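The /hello check above can be exercised end-to-end without a live deployment. The sketch below runs a local stub that answers GET /hello the way a Heketi server does (the endpoint is documented above; the exact reply text is an assumption), then queries it with Python's standard library:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    """Stub standing in for a Heketi server's /hello endpoint."""
    def do_GET(self):
        if self.path == "/hello":
            body = b"Hello from Heketi"  # assumed reply text
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()
    def log_message(self, *args):
        pass  # keep output quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/hello" % server.server_address[1]
reply = urlopen(url).read().decode()
print(reply)
server.shutdown()
```

Against a real server, the same GET request is what `curl http://<server:port>/hello` performs.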

5.2.4. Setting up the Topology

Setting up the topology allows Heketi to determine which nodes, disks, and clusters to use.

Prerequisites

You have to determine the node failure domains and clusters of nodes. A failure domain is a value assigned to a set of nodes that share the same switch, power supply, or anything else that would cause them to fail at the same time. Heketi uses this information to make sure that replicas are created across failure domains, thus providing cloud services with volumes that are resilient to both data unavailability and data loss.
You have to determine which nodes would constitute a cluster. Heketi supports multiple Red Hat Gluster Storage clusters, which gives cloud services the option of specifying a set of clusters where a volume must be created. This gives cloud services and administrators the option of creating SSD, SAS, SATA, or any other type of cluster that provides a specific quality of service to users.


Note

Heketi does not currently have a mechanism to study and build its database from an existing system. Therefore, a new trusted storage pool has to be configured for Heketi to use.

Topology Setup

The command line client loads the information about creating a cluster, adding nodes to that cluster, and then adding disks to each of those nodes. This information is added to the topology file. To load a topology file with heketi-cli, execute the following commands:


Note

A sample, formatted topology file (topology-sample.json) is installed with the heketi-client package in the /usr/share/heketi/ directory.
# export HEKETI_CLI_SERVER=http://<heketi_server:port>
# heketi-cli topology load --json=<topology_file>
Where topology_file is a file in JSON format describing the clusters, nodes, and disks to add to Heketi. The format of the file is as follows:
clusters: Array of clusters
  • Each element on the array is a map which describes the cluster as follows
    • nodes: Array of nodes in a cluster
      Each element on the array is a map which describes the node as follows
      • node: Same as Node Add, except there is no need to supply the cluster ID.
      • devices: Name of each disk to be added
      • zone: The value represents the failure domain in which the node exists.
For example:
  1. Topology file:
    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "<management-hostname>"
                                ],
                                "storage": [
                                    "<storage-hostname>"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vdb",
                            "/dev/vdc"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "<management-hostname>"
                                ],
                                "storage": [
                                    "<storage-hostname>"
                                ]
                            },
                            "zone": 2
                        },
                        "devices": [
                            "/dev/vdb",
                            "/dev/vdc"
                        ]
                    }
                ]
            }
        ]
    }
  2. Load the Heketi JSON file:
    # heketi-cli topology load --json=topology_libvirt.json
    Creating cluster ... ID: a0d9021ad085b30124afbcf8df95ec06
            Creating node ... ID: b455e763001d7903419c8ddd2f58aea0
                    Adding device /dev/vdb ... OK
                    Adding device /dev/vdc ... OK
            Creating node ... ID: 4635bc1fe7b1394f9d14827c7372ef54
                    Adding device /dev/vdb ... OK
                    Adding device /dev/vdc ... OK
  3. Execute the following command to check the details of a particular node:
    # heketi-cli node info b455e763001d7903419c8ddd2f58aea0
    Node Id: b455e763001d7903419c8ddd2f58aea0
    Cluster Id: a0d9021ad085b30124afbcf8df95ec06
    Zone: 1
    Management Hostname:
    Storage Hostname:
    Id:0ddba53c70537938f3f06a65a4a7e88b   Name:/dev/vdi            Size (GiB):499     Used (GiB):0       Free (GiB):499
    Id:4fae3aabbaf79d779795824ca6dc433a   Name:/dev/vdg            Size (GiB):499     Used (GiB):0       Free (GiB):499
  4. Execute the following command to check the details of the cluster:
    # heketi-cli cluster info a0d9021ad085b30124afbcf8df95ec06
    Cluster id: a0d9021ad085b30124afbcf8df95ec06
  5. To check the details of the device, execute the following command:
    # heketi-cli device info 0ddba53c70537938f3f06a65a4a7e88b
    Device Id: 0ddba53c70537938f3f06a65a4a7e88b
    Name: /dev/vdi
    Size (GiB): 499
    Used (GiB): 0
    Free (GiB): 499
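The clusters/nodes/devices nesting described above can also be generated rather than hand-written. The following is a minimal sketch; the hostnames are hypothetical examples, and the devices match the /dev/vdb and /dev/vdc shown in the load output above:

```python
import json

def make_node(manage, storage, zone, devices):
    """Build one node entry in the topology format described above."""
    return {
        "node": {
            "hostnames": {"manage": [manage], "storage": [storage]},
            "zone": zone,
        },
        "devices": devices,
    }

# Hypothetical hostnames and addresses; two nodes across two failure domains.
topology = {
    "clusters": [
        {
            "nodes": [
                make_node("node1.example.com", "192.0.2.11", 1,
                          ["/dev/vdb", "/dev/vdc"]),
                make_node("node2.example.com", "192.0.2.12", 2,
                          ["/dev/vdb", "/dev/vdc"]),
            ]
        }
    ]
}

# Write the file that `heketi-cli topology load --json=...` would consume.
print(json.dumps(topology, indent=4))
```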

5.2.5. Creating a Volume

After Heketi is set up, you can use the CLI to create a volume.
  1. Check the various options for creating a volume by executing the following command:
    # heketi-cli volume create --size=<size in GB> [options]
  2. For example, after setting up the topology file with two nodes in one failure domain and two nodes in another failure domain, create a 100 GB volume using the following command:
    # heketi-cli volume create --size=100
    Name: vol_0729fe8ce9cee6eac9ccf01f84dc88cc
    Size: 100
    Id: 0729fe8ce9cee6eac9ccf01f84dc88cc
    Cluster Id: a0d9021ad085b30124afbcf8df95ec06
    Mount Options: backupvolfile-servers=,
    Durability Type: replicate
    Replica: 3
    Snapshot: Disabled
    Id: 8998961142c1b51ab82d14a4a7f4402d
    Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_8998961142c1b51ab82d14a4a7f4402d/brick
    Size (GiB): 50
    Node: b455e763001d7903419c8ddd2f58aea0
    Device: 0ddba53c70537938f3f06a65a4a7e88b
  3. To check the details of the device, execute the following command:
    # heketi-cli device info 0ddba53c70537938f3f06a65a4a7e88b
    Device Id: 0ddba53c70537938f3f06a65a4a7e88b
    Name: /dev/vdi
    Size (GiB): 499
    Used (GiB): 201
    Free (GiB): 298
    Id:0f1766cc142f1828d13c01e6eed12c74   Size (GiB):50      Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_0f1766cc142f1828d13c01e6eed12c74/brick
    Id:5d944c47779864b428faa3edcaac6902   Size (GiB):50      Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_5d944c47779864b428faa3edcaac6902/brick
    Id:8998961142c1b51ab82d14a4a7f4402d   Size (GiB):50      Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_8998961142c1b51ab82d14a4a7f4402d/brick
    Id:a11e7246bb21b34a157e0e1fd598b3f9   Size (GiB):50      Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_a11e7246bb21b34a157e0e1fd598b3f9/brick
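The brick sizes in the output above follow from how Heketi lays out a distributed-replicated volume: the requested size is split across distribute subvolumes, and each subvolume is replicated. The following is a rough sketch of that arithmetic, not Heketi's actual allocator; the distribute count of 2 is inferred from the 50 GiB bricks shown above:

```python
def brick_layout(volume_gib, replica=3, distribute_count=1):
    """Return (brick_size_gib, total_bricks) for a distributed-replicated volume.

    Rough model: the volume is split into `distribute_count` subvolumes,
    each of which is replicated `replica` times.
    """
    brick_size = volume_gib / distribute_count
    return brick_size, distribute_count * replica

# The 100 GB volume above produced 50 GiB bricks, i.e. distribute count 2,
# giving 2 x 3 = 6 bricks in total (four of which landed on /dev/vdi).
size, count = brick_layout(100, replica=3, distribute_count=2)
print(size, count)  # → 50.0 6
```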

5.2.6. Expanding a Volume

Heketi expands a volume by using the add-brick command. The volume ID must be provided to perform volume expansion.
  1. Find the volume ID using the volume list command:
    # heketi-cli volume list
    Id:9d219903604cabed5ba234f4f04b2270    Cluster:dab7237f6d6d4825fca8b83a0fac24ac    Name:vol_9d219903604cabed5ba234f4f04b2270
    Id:a8770efe13a2269a051712905449f1c1    Cluster:dab7237f6d6d4825fca8b83a0fac24ac    Name:user1vol1
  2. Use this volume ID as input to heketi-cli to expand the volume:
    # heketi-cli volume expand --volume <volume_id> --expand-size <size>
    For example:
    # heketi-cli volume expand --volume a8770efe13a2269a051712905449f1c1 --expand-size 30
    Name: user1vol1
    Size: 130
    Volume Id: a8770efe13a2269a051712905449f1c1
    Cluster Id: dab7237f6d6d4825fca8b83a0fac24ac
    Mount Options: backup-volfile-servers=,
    Block: false
    Free Size: 0
    Block Volumes: []
    Durability Type: replicate
    Distributed+Replica: 3
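When scripting expansions, the volume ID usually has to be extracted from the volume list output shown in step 1. The following is a minimal parsing sketch, assuming the `Id:... Cluster:... Name:...` line format shown above:

```python
def parse_volume_list(output):
    """Parse `heketi-cli volume list` lines into a {name: volume_id} map.

    Assumes the whitespace-separated Id:/Cluster:/Name: format shown
    in the example output above.
    """
    volumes = {}
    for line in output.strip().splitlines():
        fields = dict(f.split(":", 1) for f in line.split())
        volumes[fields["Name"]] = fields["Id"]
    return volumes

# Sample output copied from the volume list example above.
sample = """\
Id:9d219903604cabed5ba234f4f04b2270    Cluster:dab7237f6d6d4825fca8b83a0fac24ac    Name:vol_9d219903604cabed5ba234f4f04b2270
Id:a8770efe13a2269a051712905449f1c1    Cluster:dab7237f6d6d4825fca8b83a0fac24ac    Name:user1vol1
"""
print(parse_volume_list(sample)["user1vol1"])
# → a8770efe13a2269a051712905449f1c1
```

The returned ID can then be passed straight to `heketi-cli volume expand --volume <volume_id>`.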

5.2.7. Deleting a Volume

To delete a volume, execute the following command:
# heketi-cli volume delete <vol_id>
For example:
$ heketi-cli volume delete 0729fe8ce9cee6eac9ccf01f84dc88cc
Volume 0729fe8ce9cee6eac9ccf01f84dc88cc deleted