5.2. Managing Volumes using Heketi
Heketi provides a RESTful management interface which can be used to manage the lifecycle of Red Hat Gluster Storage volumes. With Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can dynamically provision Red Hat Gluster Storage volumes with any of the supported durability types. Heketi automatically determines the location for bricks across the cluster, making sure to place bricks and their replicas across different failure domains. Heketi also supports any number of Red Hat Gluster Storage clusters, allowing cloud services to provide network file storage without being limited to a single Red Hat Gluster Storage cluster.
With Heketi, the administrator no longer manages or configures bricks, disks, or trusted storage pools. The Heketi service manages all hardware for the administrator, enabling it to allocate storage on demand. Any disks registered with Heketi must be provided in raw format; Heketi then manages them using LVM.
Note
The replica 3 volume type is the default and the only supported volume type that can be created using Heketi.
Figure 5.1. Heketi volume creation
A create volume request to Heketi leads it to select bricks spread across 2 zones and 4 nodes. After the volume is created in Red Hat Gluster Storage, Heketi provides the volume information to the service that initially made the request.
Heketi can be configured and executed using the CLI or the API. The sections ahead describe configuring Heketi using the CLI.
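For example, when authentication is disabled, any HTTP client can talk to the same REST interface that heketi-cli uses. A minimal sketch, assuming a server reachable at the placeholder host heketi-server on the example port 8080, lists the registered clusters:
# curl http://heketi-server:8080/clusters
The endpoint returns a JSON document containing the cluster IDs known to Heketi.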
5.2.1. Prerequisites
Ensure that the following requirements are met:
- Configure SSH access
- Configure key-based SSH authentication without a password for the Heketi user. For a non-root user:
- Ensure that the user and server specified when copying the SSH keys match the user provided to Heketi in the Heketi configuration file.
- Ensure that the user can use sudo by disabling requiretty in the /etc/sudoers file and adding sudo: true to the sshexec configuration section in the Heketi configuration file. A sketch of the sudoers changes follows this list.
- Configure the firewall
- Ensure that Heketi can accept TCP requests over the port specified in the heketi.json file. For example, on Red Hat Enterprise Linux 7 based installations, run the following commands:
# firewall-cmd --zone=zone_name --add-port=port/tcp
# firewall-cmd --zone=zone_name --add-port=port/tcp --permanent
On Red Hat Enterprise Linux 6 based installations, run the following commands:
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport port -j ACCEPT
# service iptables save
- Start glusterd
- After Red Hat Gluster Storage is installed, ensure that the glusterd service is started.
- Ensure disks are in raw format
- Disks to be registered with Heketi must be in raw format.
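For the non-root SSH case described above, the sudoers changes might look like the following sketch. The user name heketiuser is only an illustration; substitute the user you configure in the sshexec section of heketi.json:
# In /etc/sudoers (edit with visudo), assuming the Heketi SSH user is "heketiuser":
Defaults:heketiuser !requiretty
heketiuser ALL=(ALL) NOPASSWD: ALL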
5.2.2. Installing Heketi
Note
Heketi is supported only on Red Hat Enterprise Linux 7.
After installing Red Hat Gluster Storage 3.4, execute the following command to install the heketi-client:
# yum install heketi-client
The heketi-client package provides the binary for the heketi-cli command line tool.
Execute the following command to install heketi:
# yum install heketi
For more information about subscribing to the required channels and installing Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide.
5.2.3. Starting the Heketi Server
Before starting the server, ensure that the following prerequisites are met:
- Generate the passphrase-less SSH keys for the nodes which are going to be part of the trusted storage pool by running the following command:
# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
- Change the owner and the group permissions for the heketi keys using the following command:
# chown heketi:heketi /etc/heketi/heketi_key*
- Set up key-based SSH authentication access between Heketi and the Red Hat Gluster Storage servers by running the following command:
# ssh-copy-id -i /etc/heketi/heketi_key.pub root@server
- As a non-root user, set up password-less SSH access between Heketi and the Red Hat Gluster Storage servers by running the following command:
$ ssh-copy-id -i /etc/heketi/heketi_key.pub user@server
Note
To run SSH as a non-root user, the user name mentioned in user@server for ssh-copy-id must match the user name provided to Heketi in the Heketi configuration file below.
- Set up the heketi.json configuration file. The file is located at /etc/heketi/heketi.json. The configuration file has the information required to run the Heketi server. The file must be in JSON format with the following settings:
- port: string, Heketi REST service port number
- use_auth: bool, Enable JWT Authentication
- jwt: map, JWT Authentication settings
- admin: map, Settings for the Heketi administrator
- key: string, Secret key for the administrator
- user: map, Settings for the Heketi volume requests access user
- key: string, Secret key for the user
- glusterfs: map, Red Hat Gluster Storage settings
- executor: string, Determines the type of command executor to use. Possible values are:
- mock: Does not send any commands out to servers. Can be used for development and tests
- ssh: Sends commands to real systems over ssh
- db: string, Location of Heketi database
- sshexec: map, SSH configuration
- keyfile: string, File with private ssh key
- user: string, SSH user
Following is an example of the JSON file:
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "  It will not send commands to any node.",
      "ssh: This setting will notify Heketi to ssh to the nodes.",
      "  It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "  Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "path/to/private_key",
      "user": "sshuser",
      "port": "Optional: ssh port. Default is 22",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab",
      "sudo": "Optional: set to true if SSH as a non root user. Default is false."
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
Note
The location of the private SSH key that is created must be set in the keyfile setting of the configuration file, and the key should be readable by the heketi user.
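Before starting the server, it can be useful to confirm that key-based SSH from the Heketi host works as configured. A minimal sketch, assuming root SSH and a storage server named server1 (a placeholder), is:
# ssh -i /etc/heketi/heketi_key root@server1 gluster --version
If the command prints the glusterfs version without prompting for a password, the SSH setup is correct.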
5.2.3.1. Starting the Server
For Red Hat Enterprise Linux 7
- Enable heketi by executing the following command:
# systemctl enable heketi
- Start the Heketi server, by executing the following command:
# systemctl start heketi
- To check the status of the Heketi server, execute the following command:
# systemctl status heketi
- To check the logs, execute the following command:
# journalctl -u heketi
Note
After Heketi is configured to manage the trusted storage pool, do not run gluster commands directly on the pool, as this makes the heketi database inconsistent, leading to unexpected behavior with Heketi.
5.2.3.2. Verifying the Configuration
To verify that the server is running, execute the following steps:
If Heketi is not set up with authentication, use curl to verify the configuration:
# curl http://<server:port>/hello
You can also verify the configuration using the heketi-cli when authentication is enabled:
# heketi-cli --server http://<server:port> --user <user> --secret <secret> cluster list
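On a working setup, the hello endpoint returns a short plain-text greeting; for example (exact output may vary by version, and heketi-server:8080 is a placeholder for the configured host and port):
# curl http://heketi-server:8080/hello
Hello from Heketi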
5.2.4. Setting up the Topology
Setting up the topology allows Heketi to determine which nodes, disks, and clusters to use.
5.2.4.1. Prerequisites
You have to determine the node failure domains and clusters of nodes. A failure domain is a value assigned to a set of nodes that share the same switch, power supply, or anything else that would cause them to fail at the same time. Heketi uses this information to make sure that replicas are created across failure domains, thus providing cloud services with volumes that are resilient to both data unavailability and data loss.
You have to determine which nodes constitute a cluster. Heketi supports multiple Red Hat Gluster Storage clusters, which gives cloud services the option of specifying a set of clusters where a volume must be created. This provides cloud services and administrators the option of creating SSD, SAS, SATA, or any other type of cluster that provides a specific quality of service to users.
Note
Heketi currently does not have a mechanism to study and build its database from an existing system. Therefore, a new trusted storage pool has to be configured for Heketi to use.
5.2.4.2. Topology Setup
The command line client loads the information about creating a cluster, adding nodes to that cluster, and then adding disks to each of those nodes. This information is added to the topology file. To load a topology file with heketi-cli, execute the following command:
Note
A sample, formatted topology file (topology-sample.json) is installed with the heketi-client package in the /usr/share/heketi/ directory.
# export HEKETI_CLI_SERVER=http://<heketi_server:port>
# heketi-cli topology load --json=<topology_file>
Where topology_file is a file in JSON format describing the clusters, nodes, and disks to add to Heketi. The format of the file is as follows:
clusters: Array of clusters
- Each element of the array is a map which describes the cluster as follows:
- nodes: Array of nodes in a cluster. Each element of the array is a map which describes the node as follows:
- node: Same as Node Add, except there is no need to supply the cluster ID.
- devices: Name of each disk to be added
- zone: The value represents the failure domain in which the node exists.
For example:
- Topology file:
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "10.0.0.1" ], "storage": [ "10.0.0.1" ] }, "zone": 1 }, "devices": [ "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi" ] }, { "node": { "hostnames": { "manage": [ "10.0.0.2" ], "storage": [ "10.0.0.2" ] }, "zone": 2 }, "devices": [ "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi" ] }, ....... .......
- Load the Heketi JSON file:
# heketi-cli topology load --json=topology_libvirt.json
Creating cluster ... ID: a0d9021ad085b30124afbcf8df95ec06
Creating node 192.168.10.100 ... ID: b455e763001d7903419c8ddd2f58aea0
Adding device /dev/vdb ... OK
Adding device /dev/vdc ... OK
…….
Creating node 192.168.10.101 ... ID: 4635bc1fe7b1394f9d14827c7372ef54
Adding device /dev/vdb ... OK
Adding device /dev/vdc ... OK
………….
- Execute the following command to check the details of a particular node:
# heketi-cli node info b455e763001d7903419c8ddd2f58aea0
Node Id: b455e763001d7903419c8ddd2f58aea0
Cluster Id: a0d9021ad085b30124afbcf8df95ec06
Zone: 1
Management Hostname: 192.168.10.100
Storage Hostname: 192.168.10.100
Devices:
Id:0ddba53c70537938f3f06a65a4a7e88b   Name:/dev/vdi   Size (GiB):499   Used (GiB):0   Free (GiB):499
Id:4fae3aabbaf79d779795824ca6dc433a   Name:/dev/vdg   Size (GiB):499   Used (GiB):0   Free (GiB):499
…………….
- Execute the following command to check the details of the cluster:
# heketi-cli cluster info a0d9021ad085b30124afbcf8df95ec06
Cluster id: a0d9021ad085b30124afbcf8df95ec06
Nodes:
4635bc1fe7b1394f9d14827c7372ef54
802a3bfab2d0295772ea4bd39a97cd5e
b455e763001d7903419c8ddd2f58aea0
ff9eeb735da341f8772d9415166b3f9d
Volumes:
- To check the details of the device, execute the following command:
# heketi-cli device info 0ddba53c70537938f3f06a65a4a7e88b
Device Id: 0ddba53c70537938f3f06a65a4a7e88b
Name: /dev/vdi
Size (GiB): 499
Used (GiB): 0
Free (GiB): 499
Bricks:
5.2.5. Creating a Volume
After Heketi is set up, you can use the CLI to create a volume.
- Execute the following command to check the various options for creating a volume (a sketch using some of these options follows this procedure):
# heketi-cli volume create --size=<size in GiB> [options]
- For example, after setting up the topology file with two nodes in one failure domain and two nodes in another failure domain, create a 100 GiB volume using the following command:
# heketi-cli volume create --size=100
Name: vol_0729fe8ce9cee6eac9ccf01f84dc88cc
Size: 100
Id: 0729fe8ce9cee6eac9ccf01f84dc88cc
Cluster Id: a0d9021ad085b30124afbcf8df95ec06
Mount: 192.168.10.101:vol_0729fe8ce9cee6eac9ccf01f84dc88cc
Mount Options: backupvolfile-servers=192.168.10.100,192.168.10.102
Durability Type: replicate
Replica: 3
Snapshot: Disabled
Bricks:
Id: 8998961142c1b51ab82d14a4a7f4402d
Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_8998961142c1b51ab82d14a4a7f4402d/brick
Size (GiB): 50
Node: b455e763001d7903419c8ddd2f58aea0
Device: 0ddba53c70537938f3f06a65a4a7e88b
…………….
- To check the details of the device, execute the following command:
# heketi-cli device info 0ddba53c70537938f3f06a65a4a7e88b
Device Id: 0ddba53c70537938f3f06a65a4a7e88b
Name: /dev/vdi
Size (GiB): 499
Used (GiB): 201
Free (GiB): 298
Bricks:
Id:0f1766cc142f1828d13c01e6eed12c74   Size (GiB):50   Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_0f1766cc142f1828d13c01e6eed12c74/brick
Id:5d944c47779864b428faa3edcaac6902   Size (GiB):50   Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_5d944c47779864b428faa3edcaac6902/brick
Id:8998961142c1b51ab82d14a4a7f4402d   Size (GiB):50   Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_8998961142c1b51ab82d14a4a7f4402d/brick
Id:a11e7246bb21b34a157e0e1fd598b3f9   Size (GiB):50   Path: /var/lib/heketi/mounts/vg_0ddba53c70537938f3f06a65a4a7e88b/brick_a11e7246bb21b34a157e0e1fd598b3f9/brick
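If more than one cluster is registered with Heketi, a volume can be restricted to specific clusters. A minimal sketch, reusing the cluster ID from the example above and a hypothetical volume name appvol1, uses the --name and --clusters options:
# heketi-cli volume create --size=100 --name=appvol1 --clusters=a0d9021ad085b30124afbcf8df95ec06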
5.2.6. Expanding a Volume
Heketi expands a volume by using the add-brick command. The volume ID has to be provided to perform volume expansion.
- Find the volume id using the volume list command.
# heketi-cli volume list
Id:9d219903604cabed5ba234f4f04b2270   Cluster:dab7237f6d6d4825fca8b83a0fac24ac   Name:vol_9d219903604cabed5ba234f4f04b2270
Id:a8770efe13a2269a051712905449f1c1   Cluster:dab7237f6d6d4825fca8b83a0fac24ac   Name:user1vol1
- This volume ID can be used as input to heketi-cli for expanding the volume. A way to confirm the result is sketched after these steps.
# heketi-cli volume expand --volume <volume_id> --expand-size <size>
For example:
# heketi-cli volume expand --volume a8770efe13a2269a051712905449f1c1 --expand-size 30
Name: user1vol1
Size: 130
Volume Id: a8770efe13a2269a051712905449f1c1
Cluster Id: dab7237f6d6d4825fca8b83a0fac24ac
Mount: 192.168.21.14:user1vol1
Mount Options: backup-volfile-servers=192.168.21.15,192.168.21.16
Block: false
Free Size: 0
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
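To confirm the new size, query the volume afterwards. A minimal check, reusing the volume ID from the example above, is:
# heketi-cli volume info a8770efe13a2269a051712905449f1c1
The Size field in the output should reflect the expanded size.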
5.2.7. Deleting a Volume
To delete a volume, execute the following command:
# heketi-cli volume delete <vol_id>
For example:
# heketi-cli volume delete 0729fe8ce9cee6eac9ccf01f84dc88cc
Volume 0729fe8ce9cee6eac9ccf01f84dc88cc deleted
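To confirm the deletion, list the volumes again; the deleted volume ID should no longer appear:
# heketi-cli volume list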