Technical Notes


Red Hat Gluster Storage 3.1

Detailed notes on the changes implemented in Red Hat Gluster Storage 3.1

Divya Muntimadugu

Red Hat Engineering Content Services

Bhavana Mohan

Red Hat Engineering Content Services

Abstract

The Red Hat Gluster Storage 3.1 Technical Notes list and document the changes made in Red Hat Gluster Storage 3.1.

Chapter 1. RHBA-2015:1845

The bugs contained in this chapter are addressed by advisory RHBA-2015:1845. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2015:1845-05.html.

ami

BZ#1250821
Previously, Amazon Machine Images (AMIs) for Red Hat Gluster Storage 3.1 on Red Hat Enterprise Linux 7 had the NFS and Samba repositories enabled by default and the Red Hat Enterprise Linux 7 repositories disabled. This issue was fixed in the rh-amazon-rhui-client-2.2.124-1.el7 package build, and a new AMI image was created to include it. A new set of AMIs has now been uploaded to production, where the Red Hat Enterprise Linux 7 repositories are enabled by default, and the NFS and Samba repositories are disabled (as expected) and must be manually enabled.
BZ#1253141
Previously, Amazon Machine Images (AMIs) for Red Hat Gluster Storage 3.1 on Red Hat Enterprise Linux 7 had the NFS and Samba repositories enabled by default and the Red Hat Enterprise Linux 7 repositories disabled. This issue was fixed in the rh-amazon-rhui-client-2.2.124-1.el7 package build, and a new AMI image was created to include it. A new set of AMIs has now been uploaded to production, where the Red Hat Enterprise Linux 7 repositories are enabled by default, and the NFS and Samba repositories are disabled (as expected) and must be manually enabled.
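For reference, the repository state on such an instance can be inspected and adjusted with yum; yum-config-manager is provided by the yum-utils package, and the repository IDs shown below are placeholders for the actual IDs reported by yum repolist all:
# yum repolist all
# yum-config-manager --enable <nfs-repo-id> <samba-repo-id>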

build

BZ#1249989
Previously, a file required by the gluster-swift service was moved from the glusterfs-api RPM to the python-gluster RPM. As a consequence, the gluster-swift service could not be started. With this fix, the required RPM (python-gluster) is added as a dependency and the gluster-swift service can be started.

gluster-afr

BZ#1227759
Previously, if a brick in a replica set went down, write speed could drop drastically because of the extra fsync operations that were performed. With this fix, this issue is resolved.
BZ#1234399
Previously, the split-brain resolution command performed a conservative merge on directories with both the 'bigger-file' and 'source-brick' options. With this fix, for directories, the 'bigger-file' option is disallowed, while the 'source-brick' option performs a conservative merge; the user is informed with an appropriate message in both cases.
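For reference, the split-brain resolution commands take the following general forms; the volume, file, and brick names are placeholders:
# gluster volume heal VOLNAME split-brain bigger-file FILE
# gluster volume heal VOLNAME split-brain source-brick HOSTNAME:BRICKNAME FILE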
BZ#1240657
Previously, AFR logged messages about files and directories going into split-brain even for failures that were unrelated to split-brain. As a consequence, for each stat on a file or directory that failed, AFR wrongly reported that it was in split-brain. With this fix, AFR logs messages about split-brain only in case of a true split-brain.
BZ#1238398
Previously, the split-brain-choice was not considered when a file was only in metadata split-brain. As a consequence, incorrect file metadata (with fops like ls and stat) was displayed for files in split-brain through the mount even after you set replica.split-brain-choice. With this fix, the correct metadata is displayed.
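For reference, the split-brain choice can be set from the client mount with setfattr before inspecting the file; the brick (client) name and paths below are placeholders:
# setfattr -n replica.split-brain-choice -v "VOLNAME-client-0" /mnt/VOLNAME/path/to/file
# stat /mnt/VOLNAME/path/to/file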
BZ#1239021
Previously, the self-heal daemon performed a crawl only on the brick that came up after it went down. So the pending heals did not happen immediately after the brick came up, but only after the cluster.heal-timeout interval elapsed. With this fix, index heal is triggered on all local subvolumes of a replicated volume.
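For reference, the heal interval is controlled by a volume option, and a heal can also be triggered manually; the volume name and value below are placeholders:
# gluster volume set VOLNAME cluster.heal-timeout 600
# gluster volume heal VOLNAME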

gluster-dht

BZ#1245565
Previously, unsynchronized memory access caused the client process to crash in certain scenarios while reading extended attributes. This has now been fixed by synchronizing access to the variable.
BZ#1234610
Previously, POSIX ACLs set on a file were copied onto the DHT linkto file created when the file was being migrated. This changed the linkto file permissions causing the file to be treated as a data file and file operations to be sent to the wrong file. With this fix, the POSIX ACLs are not set on the DHT linkto files.
BZ#1244527
Previously, the "gluster vol rebalance vol_name start" command might be hung if any nodes in a cluster go down simultaneously. With this fix, this issue is resolved.
BZ#1236038
Previously, a remove-brick start operation reported success even if the glusterd service was not running on the node that hosts the removed brick. Due to this, the user could still perform a remove-brick commit even though the rebalance had not been triggered, resulting in data loss. With this fix, remove-brick start and commit fail if the glusterd service is not running on the node that hosts the brick being removed.
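For reference, a remove-brick operation is run as a start/status/commit sequence; the volume and brick names below are placeholders:
# gluster volume remove-brick VOLNAME HOSTNAME:/rhgs/brick1 start
# gluster volume remove-brick VOLNAME HOSTNAME:/rhgs/brick1 status
# gluster volume remove-brick VOLNAME HOSTNAME:/rhgs/brick1 commit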
BZ#1243542
Previously, when a distribute leg of a dist-rep Gluster volume that hosts VM images was removed using the `remove-brick start` Gluster CLI, the VMs went into a paused state. With the fix, they do not go into a paused state.

gluster-quota

BZ#1251425
Previously, the error "patchy-marker: lookup failed for gfid:00000000-0000-0000-0000-000000000001/.glusterfs: Operation not permitted" was seen in the brick logs. Now, the marker is optimized and this error is fixed.
BZ#1229621
Previously, when the disk quota was exceeded, the "could not read the link from the gfid handle /rhs/brick1/b1/.glusterfs/a3/f3/a3f3664f-df98-448e-b5c8-924349851c7e (No such file or directory)" error was seen in the brick logs. With this fix, these errors are not logged.
BZ#1251457
Previously, many errors were logged in the brick logs. After the marker translator was refactored, these errors are no longer logged.
BZ#1065651
Previously, quota expected users to enter the absolute path, but no proper error was displayed if the absolute path was not provided. With this fix, if the absolute path is not provided, the "Please enter the absolute path" error is displayed.
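For reference, the quota limit-usage command expects a path relative to the volume root, that is, a path beginning with /; the names and size below are placeholders:
# gluster volume quota VOLNAME limit-usage /dir1 10GB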
BZ#1238071
Previously, upon restarting glusterd, the quota daemon was not started when more than one volume was configured and quota was enabled only on the second volume. With this fix, the quota daemon starts on node reboot.
BZ#1027723
Previously, when the 'gluster volume reset VOLNAME' command was executed, the features.quota-deem-statfs option was set to its default value. With this fix, when you execute the 'gluster volume reset VOLNAME' command, the features.quota-deem-statfs option is not changed. Setting and resetting this option is allowed only through the 'gluster volume quota VOLNAME enable/disable' command.
BZ#1064265
Previously, the gluster CLI allowed setting the quota soft-limit value to greater than 100%. With this fix, the soft-limit value is validated and only values in the range 1-100% are allowed.
BZ#919235
Previously, the log files grew quickly because ENOENT and ESTALE errors were logged at the WARNING log level. After the marker refactoring, ENOENT and ESTALE are logged at the DEBUG level and log growth has reduced drastically.
BZ#1236672
Previously, the brick process crashed when create, write, and remove operations were performed in parallel on a quota-enabled volume. With this fix, the brick process does not crash even if create, write, and remove operations are performed in parallel.
BZ#1238049
Previously, the brick process crashed when rename and write operations were performed in parallel, or during continuous rename operations. With this fix, this issue is resolved.
BZ#1245542
Previously, if unlink was performed while the update transaction was still in progress, an ENOENT error was logged. Now, these ENOENT errors are not logged.
BZ#1229606
Previously, the error "Failed to check quota size limit" was displayed when the disk quota was exceeded. With this fix, this error is not displayed
BZ#1241807
Previously, the brick process crashed when rename and write operations were performed in parallel on a quota-enabled volume. With this fix, the brick process does not crash even if write and rename operations are performed in parallel.
BZ#1242803
Previously, executing the 'gluster volume quota list' command used to hang if quotad was not running. With this fix, the "Connection failed. Please check if quota daemon is operational." error message is displayed.

gluster-swift

BZ#1255308
Previously, the 'Content-Length' of the object, which was stored as metadata in the extended attribute, was not validated against the actual size of the object while processing a GET request. As a consequence, when an object was modified from the file interface and later accessed over the Swift interface, the Swift client either received incomplete or inconsistent data or the request failed entirely. With this fix, a check is made to verify that the 'Content-Length' stored as metadata is the same as the actual size of the object; if not, the stored metadata is invalidated and the stored Content-Length is updated. The entire object data is now returned to the client and the request completes successfully.
BZ#1238116
Previously, when an error occurred and an exception was raised, in some cases open file descriptors were not closed. This resulted in file descriptors being leaked. With this fix, raised exceptions are caught first, the file descriptors are closed, and then the original exception is re-raised.
BZ#1251925
Previously, the .trashcan directory, which is always present in the root of the volume, was considered to be a container and was returned in the container listing. With this fix, the .trashcan directory is no longer returned in the container listing.

glusterfs

BZ#1255471
Previously, on certain occasions, the libgfapi returned incorrect errors. NFS-Ganesha would handle the incorrect error in such a way that the procedures were retried. However, the used file descriptor should have been marked as bad, and no longer used. As a consequence, using a bad file descriptor caused access to memory that was freed and made NFS-Ganesha segfault. With this fix, libgfapi returns correct errors and marks the file descriptor as bad if the file descriptor should not be used again. Now, NFS-Ganesha does not try to reuse bad file descriptors and prevents segmentation faults.
BZ#1244415
Previously, glusterd was not fully initializing its transports when using management encryption. As a consequence, an unencrypted incoming connection would cause glusterd to crash. With this fix, the transports are now fully initialized and additional checks have been added to handle unencrypted incoming connections. Now, glusterd no longer crashes on incoming unencrypted connections when using management encryption.
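For reference, management encryption is enabled by creating the secure-access marker file on each node before glusterd starts; this is the standard location used by glusterd:
# touch /var/lib/glusterd/secure-access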
BZ#1252359
Previously, volumes had to be remounted to benefit from a change in the network.ping-timeout value for the respective volume. With this fix, the network.ping-timeout values take effect even without remounting the volume.
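For reference, the timeout is changed with the volume set command; the volume name and value below are placeholders:
# gluster volume set VOLNAME network.ping-timeout 30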
BZ#1213893
Previously, the brick processes did not consider rebalance processes to be trusted clients. As a consequence, if the auth.allow option was set for a volume, connections from the rebalance processes for that volume were rejected by the brick processes, causing rebalance to hang. With this fix, the rebalance process is treated as a trusted client by the brick processes. Now, the rebalance works even if the auth.allow option is set for a volume.
BZ#1238977
Previously, if a bad file was detected by the scrubber, the scrubber logged the bad file information as an INFO message in the scrubber log. With this fix, the scrubber logs a bad file as an ALERT message in the scrubber log.
BZ#1241385
Previously, the '--output-prefix' attribute specified in the 'glusterfind pre' command did not apply the output prefix to deleted entries present in the glusterfind output file. With this fix, the output prefix of deleted entries is present in the glusterfind output file.
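For reference, the pre command with an output prefix takes the following general form; the session, volume, and path names below are placeholders:
# glusterfind pre SESSION_NAME VOLNAME /tmp/outfile.txt --output-prefix /mnt/VOLNAME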
BZ#1232569
Previously, the main status file related to a session was maintained only on the node where the session was created. As a consequence, when "glusterfind list" was executed on any node other than the one on which the session was created, the health of the session was shown as "corrupted." With this fix, glusterfind does not show "corrupted" when it does not find the status file on a node other than the one where the session was created. Now, the 'glusterfind list' command lists the sessions only from the main node where the session was created.
BZ#1228135
Previously, bitrot commands issued through the "gluster volume set volname *" form to start or stop the bitd and scrub daemons, or to set any value for the bitrot and scrubber daemons, succeeded even though gluster did not support reconfiguring BitRot options through "gluster volume set volname *". Due to this, when such a gluster volume set command was executed, the bitrot and scrub daemons crashed. With this fix, gluster accepts only "gluster volume bitrot VOLNAME *" commands for bitrot and scrub operations.
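For reference, bitrot and scrub operations are performed with the dedicated bitrot subcommands; the volume name and chosen values below are placeholders:
# gluster volume bitrot VOLNAME enable
# gluster volume bitrot VOLNAME scrub-throttle lazy
# gluster volume bitrot VOLNAME scrub-frequency daily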
BZ#1234213
Previously, on executing the 'glusterfind delete' command, the user was presented with password prompts for peer nodes. This was due to the peer node SSH key, set up for password-less SSH, getting deleted on local node before the keys got deleted on peer nodes. As a consequence, password prompts got displayed for all peer nodes all at once. The user had to enter passwords for all peer nodes as many times as there are peer nodes in the cluster. With this fix, checks have been added to avoid deleting SSH keys on local node before deleting them on peer node. The SSH keys on local node eventually get deleted as part of session cleanup. Now, the password prompts are no longer presented on executing a 'glusterfind delete' command.

glusterfs-geo-replication

BZ#1232216
Previously, if a meta-volume was configured, there was a small race window where the geo-replication worker accessed an unreferenced file descriptor of the lock file maintained on the shared storage volume. As a consequence, the geo-replication worker died and restarted. With this fix, the worker always gets the correct file descriptor, and the geo-replication worker no longer dies and restarts.
BZ#1239075
Previously, the geo-replication worker did not retry the operation when an ESTALE error occurred during lstat on an entry. As a consequence, the geo-replication worker crashed and restarted. With this fix, the geo-replication worker does not crash on ESTALE during lstat on an entry.
BZ#1236546
Previously, both ACTIVE and PASSIVE geo-replication workers registered to the changelog at almost the same time. When a PASSIVE worker became ACTIVE, the start and end times used for the history API were the current stime and the register_time respectively. Hence, register_time could be less than stime, causing the history API to fail. As a consequence, a passive worker that became active died the first time. With this fix, a passive worker that becomes active no longer dies the first time.

glusterfs-server

BZ#1243722
Previously, glusterd was not fully initializing its transports when using management encryption. As a consequence, an unencrypted incoming connection would cause glusterd to crash. With this fix, the transports are now fully initialized and additional checks have been added to handle unencrypted incoming connections. Now, glusterd no longer crashes on incoming unencrypted connections when using management encryption.
BZ#1226665
Previously, when there was no space left on the file system and the user performed any operation that resulted in a change to the /var/lib/glusterd/* files, glusterd failed to write to a temporary file. With this fix, a proper error message is displayed when the file system containing /var/lib/glusterd/* is full.
BZ#1251409
Previously, brick logs were filled with the "server_resolve_inode (resolve_gfid+0x88) (dict_copy_with_ref+0xa4) invalid argument" error. With this fix, these log messages are not seen.
BZ#1134288
Previously, on executing “gluster volume status” command, there was an error log seen in the glusterd.log file about “unable to get transaction opinfo”. With this fix, the glusterd service does not log this error.
BZ#1230532
Previously, even though the "gluster reset force" command succeeded, any daemons (bitd, quotad, scrub, shd) that are enabled on the volume were still running. With this fix, these daemons do not run after "gluster reset force" command success.
BZ#1243886
Previously, there was a huge memory leak in the brick processes, which consumed a large amount of memory. With this fix, this issue is resolved.
BZ#1245536
Previously, on glusterd start, UUID was generated and stored in the /var/lib/glusterd/glusterd.info file. This information was static and identical for every instance created. As a consequence, peer probe between instances failed. With this fix, UUID will only be generated on first probe or during volume creation.
BZ#1246946
Previously, while detaching a peer from a node, the glusterd service was logging a critical message saying that it could not find the peer. With this fix, the glusterd service does not log this error.
BZ#1224163
Previously, even though the "gluster reset force" command succeeded, bitrot and scrub daemons were not stopped. With this fix, these daemons do not run after "gluster reset force" command success.
BZ#1247445
Previously, when a cluster had multiple volumes and the first volume in the volume list was not a replicated volume while any of the other volumes was a replicated volume, the self-heal daemon (shd) did not start after a node reboot. With this fix, shd starts in this scenario.

gstatus

BZ#1250453
The gstatus command is now fully supported. The gstatus command provides an easy-to-use, high-level view of the health of a trusted storage pool with a single command. It gathers information about the health of a Red Hat Gluster Storage trusted storage pool for distributed, replicated, distributed-replicated, dispersed, and distributed-dispersed volumes.
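For reference, the tool is run from any node in the trusted storage pool; a minimal invocation is:
# gstatus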

nfs-ganesha

BZ#1241761
Previously, when running high workloads, memory corruption occurred if the last volume exported via NFS-Ganesha was unexported. As a consequence, the process would crash and fail over to another node. Now, the memory corruption issue has been fixed and the crash no longer occurs.
BZ#1241871
Previously, if the mount path for NFSv3 contained symbolic links, the NFSv3 mount failed. With this fix, the NFS server resolves the symbolic links in the mount path before sending it to the client, and the mount succeeds.
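For reference, an NFSv3 mount of a volume exported by NFS-Ganesha takes the following general form; the server, volume, and mount point names below are placeholders:
# mount -t nfs -o vers=3 server.example.com:/VOLNAME /mnt/nfs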
BZ#1226817
NFS-Ganesha always runs on a subset of nodes in the trusted storage pool, so when a new volume was created, Gluster-NFS could be started on nodes outside that subset. As a consequence, the same volume was exported via NFS-Ganesha on one node and via Gluster-NFS on another. With this fix, Gluster-NFS is disabled when the nfs-ganesha option is enabled. Now, either NFS-Ganesha or Gluster-NFS exports the volume in the trusted storage pool, but not both.
BZ#1238118
Previously, when DBus signals were sent multiple times in succession for a volume that was already exported, the NFS-Ganesha service crashed. With this fix, the NFS-Ganesha service does not crash.
BZ#1235971
Previously, the ganesha-ha.sh --status command printed the output of "pcs status" as is on the screen. The output was not user friendly. With this fix, the output of the pcs status is formatted well and is easily understandable by the user.
BZ#1245636
Previously, one of the APIs used by the User Serviceable snapshots did not get resolved during dynamic loading due to its symbol collision with another API provided by ntirpc library used by NFS-Ganesha. As a consequence, User Serviceable Snapshots did not work with NFS-Ganesha. With this fix, the APIs are made static to avoid the symbol collisions. User Serviceable Snapshots now work with NFS-Ganesha.

redhat-storage-server

BZ#1248899
Gdeploy is a tool that automates the process of creating, formatting, and mounting bricks. When setting up a fresh cluster, gdeploy can be the preferred choice, as manually executing numerous commands is error prone. The advantages of using gdeploy include automated brick creation, flexibility in choosing the drives to configure (sd, vd, and so on), and flexibility in naming the logical volumes (LV) and volume groups (VG).
BZ#1251360
In this release, two new tuned profiles, rhgs-sequential-io and rhgs-random-io, have been added to Red Hat Gluster Storage for RHEL 6.
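For reference, a tuned profile is applied and verified with tuned-adm; choose the profile that matches the workload:
# tuned-adm profile rhgs-random-io
# tuned-adm active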

Chapter 2. RHBA-2015:1848

The bugs contained in this chapter are addressed by advisory RHBA-2015:1848. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2015:1848-06.html.

gluster-nagios-addons

BZ#1196144
Previously, the nrpe service did not reload when the gluster-nagios-addons rpm was updated. Due to this, the user had to restart/reload the nrpe service to monitor the hosts properly. With this fix, the nrpe service will be automatically reloaded when the gluster-nagios-addons rpm is updated.

nagios-server-addons

BZ#1236290
Previously, the nodes were updating the older service even after the Cluster Quorum service was renamed. Due to this, the Cluster Quorum service status in Nagios was not reflected. With this fix, the plugins on the nodes are updated so that the notifications are pushed to the new service and the Cluster Quorum status is reflected correctly.
BZ#1235651
Previously, the volume status service did not provide the status of disperse and distributed dispersed volumes. With this fix, the volume status service is modified to include the logic required for interpreting the volume status of disperse and distributed dispersed volumes and the volume status is now displayed correctly.

rhsc

BZ#1250032
Previously, the dashboard reported the status of all interfaces, causing network interfaces that were not in use to be reported as down. With this fix, only the status of interfaces that have an IP address assigned to them is displayed in the dashboard.
BZ#1250024
Previously, the size unit conversion was handled only up to TiB units, and hence the units were not displayed correctly for storage sizes of a petabyte and above. With this fix, the size unit conversion is updated to handle up to YiB (1024^8 bytes) and the dashboard displays the units correctly.
BZ#1204924
Previously, while calculating the network utilization the effective bond speed was not taken into consideration. Due to this, the network utilization displayed to the user when the network interfaces were bonded was incorrect. With this fix, the effective bond speed is taken into consideration and the network utilization is correctly displayed even when network interfaces are bonded.
BZ#1225831
Previously, the data alignment value was ignored by the python-blivet module during pvcreate, due to which the physical volume (PV) was always created using a 1024 data alignment size. VDSM now uses the lvm pvcreate command to fix this issue.
BZ#1244902
Previously, editing a host protocol from xml-rpc to json-rpc and then activating the host caused the host to become non-operational due to connectivity issues. This issue is now fixed.
BZ#1224616
Previously, the Trends tab UI plugin did not send the 'Prefer' HTTP header as part of every REST API call. Due to this, the existing REST API session was invalidated whenever the user clicked the Trends tab, and the user was prompted to provide the user name and password again.
BZ#1230354
Previously, a proper description for the geo-replication options was not displayed in the configuration option dialog. With this fix, the correct descriptions are displayed.
BZ#1230348
Previously, the storage devices were not synced to Red Hat Gluster Storage Console for up to two hours after the user added hosts. Due to this, the user had to sync the devices manually by clicking the 'Sync' button to view the storage devices after adding hosts to the console. With this fix, storage devices from the host are synced automatically whenever the user activates, adds, or re-installs the host in the UI.
BZ#1236696
Previously, when a volume was restored to the state of one of its snapshots, the dashboard used to display brick delete alerts. This happened as part of snapshot restore where the existing bricks were removed and new bricks were added with a new mount point. The sync job generated an alert for this operation. With this fix, brick delete alerts are not generated after restoring the volume to the state of a snapshot.
BZ#1234357
Previously, as Red Hat Gluster Storage Console does not support cluster level option operations (set and reset), the user did not have a way to set cluster.enable-shared-storage volume option from the console. With this fix, this volume option is set automatically by the console when a new node is added to a volume that is participating as a master of a geo-replication session.
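For reference, the option that the Console sets is a cluster-wide volume option that can also be set manually from the gluster CLI:
# gluster volume set all cluster.enable-shared-storage enable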
BZ#1244714
Previously, due to an issue in the code that took care of time zone wise conversion of execution time for volume snapshot schedule, the schedule execution time used to be off by 12 hours. For example, if the execution time is scheduled as 10:00 AM, it was set as 10:00 PM. With this fix, the time zone wise conversion logic of execution time for volume snapshot schedule is corrected.
BZ#1244865
Previously, when bricks were created using the UI, the xfs file system was created with the inode size set to 256 bytes rather than the recommended 512 bytes for disk types other than RAID6 and RAID10. This has now been fixed to use the recommended 512 bytes size.
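For reference, the recommended inode size is passed to mkfs.xfs when a brick is formatted manually; the device path below is a placeholder:
# mkfs.xfs -i size=512 /dev/rhgs_vg/rhgs_lv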
BZ#1240231
Previously, if the gluster meta volume was deleted from the CLI and added back again, Red Hat Gluster Storage Console did not trigger the disabling of CLI based volume snapshot scheduling again. With this fix, the gluster sync job in the console is modified such that, even if the meta volume gets deleted and created back again, the console will explicitly disable the CLI based snapshot schedule.

rhsc-monitoring-uiplugin

BZ#1230580
Previously, when a brick in a dispersed volume was down, the status of the volume was displayed as partially available even though the volume was fully available. With this fix, the logic to handle distributed dispersed volume types is corrected, and the dashboard now displays the status for dispersed and distributed dispersed volumes correctly.

vdsm

BZ#1231722
Previously, due to an issue with exception handling in VDSM and the Engine, using an existing mount point while creating a brick resulted in an unexpected exception in the UI. With this fix, the correct error message is displayed when the given mount point is already in use.

Chapter 3. RHSA-2015:1495-10

The bugs contained in this chapter are addressed by advisory RHSA-2015:1495-10. Further information about this advisory is available at https://rhn.redhat.com/errata/RHSA-2015:1495-10.html.

distribution

BZ#1223238
The glusterfs packages have been upgraded to upstream version 3.7.1, which provides a number of bug fixes and enhancements over the previous version.
BZ#1123346
With this release of Red Hat Gluster Storage Server, you can install and manage groups of packages through the groupinstall feature of yum. By using yum groups, system administrators need not manually install related packages individually.
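For reference, yum groups are listed and installed as shown below; the group name is a placeholder for whatever yum grouplist reports on your system:
# yum grouplist
# yum groupinstall "<group-name>"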

gluster-afr

BZ#1112512
Previously, when a replace-brick commit force operation was performed, there was no indication of pending heals on the replaced brick. As a result, if operations succeeded on the replaced brick before its healing and the brick was marked as a source, there was a potential for data loss. With this fix, the replaced brick is marked as a sink so that it is not considered as a source for healing until it has a copy of the files.
BZ#1223916
Previously, the brick processes crashed when rebalance was in progress. With this fix, this issue is resolved.
BZ#871727
Previously, when self-heal was triggered by the self-heal daemon (shd), it did not update the read-children. Due to this, if the other brick died, the VMs went into a paused state because the mount assumed all read-children were down. With this fix, this issue is resolved and the read-children are repopulated using getxattr.

gluster-dht

BZ#1131418
Previously, when the gf_defrag_handle_hardlink function was executed, setxattr was performed on the internal AFR keys too. This led to AFR aborting the operation with the following error, which resulted in hard link migration failures:
operation not supported
With this fix, setxattr is performed only on the required keys.
BZ#1047481
Previously, the extended attributes set on a file while it was being migrated were not set on the destination file. Once migration completed, the source file was deleted, causing those extended attributes to be lost. With this fix, the extended attributes set on a file while it is being migrated are now set on the destination file as well.

gluster-quota

BZ#1171896
Previously, when a child directory was created within a parent directory on which a quota was set, executing the df command displayed the size of the entire volume. With this fix, this issue is resolved and executing the df command displays the size of the directory.
BZ#1021820
Previously, the quotad.socket file existed in the /tmp folder. With this release, the quotad.socket file is moved to /var/run/gluster.
BZ#1039674
Previously, when quotad was restarted as part of an add-brick or remove-brick operation, it resulted in a 'Transport endpoint Not Connected' error in the I/O path. With this fix, this issue is resolved.
BZ#1034911
Previously, setting the quota limit on an invalid path resulted in the following error message, which did not clearly indicate that a path relative to the gluster volume is required:
Failed to get trusted.gfid attribute on path /mnt/ch2/quotas. 
Reason : No such file or directory
With this fix, a clearer error message is displayed: please enter the path relative to the volume.
BZ#1027693
Previously, the features.quota-deem-statfs volume option was on even when quota was disabled. With this fix, features.quota-deem-statfs is turned off when quota is turned off.
BZ#1101270
Previously, setting a quota limit value between 9223372036854775800 and 9223372036854775807, close to the maximum supported value of 9223372036854775807, would fail. With this fix, setting the quota limit to any value in the range 0 - 9223372036854775807 is successful.
BZ#1027710
Previously, the features.quota-deem-statfs volume option was off by default when quota was enabled. With this fix, features.quota-deem-statfs is turned on by default when quota is enabled. To disable quota-deem-statfs, execute the following command:
# gluster volume set volname quota-deem-statfs off
BZ#1023416
Previously, setting the limit usage to 1B failed. With this fix, the issue is resolved.
BZ#1103971
Previously, when a quota limit of 16384PB was set, the quota list output for Soft-limit exceeded and Hard-limit exceeded values was wrongly reported as Yes. With this fix, the supported quota limit range is changed to (0 - 9223372036854775807) and the quota list provides the correct output.

gluster-smb

BZ#1202456
Previously, with the case sensitive = no and preserve case = yes options set in /etc/samba/smb.conf, renaming a file to a case-insensitive match of an existing file name would succeed without warning or error. This led to two files with the same name being shown in a directory with only one of them being accessible. With this fix, this issue is resolved and the user is warned of an existing file of the same name.

gluster-snapshot

BZ#1203159
A new volume can now be created from a snapshot. To create a writable volume from a snapshot, execute the following command:
# gluster snapshot clone clonename snapname
The clonename becomes the volname of the newly created volume.
BZ#1181108
When a snapshot is created, the current time-stamp in GMT is appended to its name. Due to this, the same snapshot name can be used by multiple snapshots. If a user does not want to append the timestamp with the snapshot name, the no-timestamp option in the snapshot create command can be used.
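For reference, the option is passed at snapshot creation time; the snapshot and volume names below are placeholders:
# gluster snapshot create snap1 VOLNAME no-timestamp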
BZ#1048122
Previously, the snapshot delete command had to be executed multiple times to delete more than one snapshot. Two new commands are now introduced that can be used to delete multiple snapshots. To delete all the snapshots present in a system, execute the following command:
# gluster snapshot delete all
To delete all the snapshots present in a specified volume, execute the following command:
# gluster snapshot delete volume volname

glusterfs

BZ#1086159
Previously, the glusterd service crashed when the peer-detach command was executed while a snapshot-create command was underway. With this fix, glusterd does not crash in this scenario.
BZ#1150899
In Red Hat Gluster Storage 3.1, system administrators can create, configure, and use dispersed volumes. Dispersed volumes allow the recovery of the data stored on one or more bricks in case of failure, and require less storage space when compared to a replicated volume.
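For reference, a dispersed volume is created by specifying the disperse and redundancy counts; the server and brick names below are placeholders:
# gluster volume create VOLNAME disperse 6 redundancy 2 server{1..6}:/rhgs/brick1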
BZ#1120592
Previously, there was an error while converting a replicated volume to a distributed volume by reducing the replica count to one. With this fix, this issue is resolved and a replicated volume can be converted to a distributed volume by reducing the replica count to one.
BZ#1121585
Previously, when a remove-brick operation was performed on a volume and the remove-brick status command was then executed for non-existent bricks on the same volume, it displayed the status for these bricks without checking their validity. With this fix, remove-brick status checks whether the brick details are valid before displaying the status. If the brick details are invalid, the following error is displayed:
Incorrect brick brick_name for volume_name
BZ#1238626
Previously, unsynchronized memory management between threads caused the glusterfs client process to crash when one thread tried to access memory that had already been freed by another thread. With this fix, access to the memory location is now synchronized across threads.
BZ#1203901
Previously, the Gluster NFS server failed to process RPC requests because of certain deadlocks in the code. This occurred when there were frequent disconnections after I/O operations from the NFS clients. Due to this, NFS clients or the mount became unresponsive. With this release, this issue is resolved and the NFS clients remain responsive.
BZ#1232174
In Red Hat Gluster Storage 3.1, system administrators can identify bit rot, i.e. the silent corruption of data in a gluster volume. With BitRot feature enabled, the system administrator can get the details of the files that are corrupt due to hardware failures.
BZ#826758
With this release of Red Hat Gluster Storage, system administrators can create tiered volumes (fast and slow tiers) and the data is placed optimally between the tiers. Frequently accessed data are placed on faster tiers (typically on SSDs) and the data that is not accessed frequently is placed on slower disks automatically.
BZ#1188835
Previously, the gluster command logged messages of the DEBUG and TRACE log levels in /var/log/glusterfs/cli.log. This caused the log file to grow large quickly. With this release, only messages of log level INFO or higher precedence are logged, which reduces the rate at which /var/log/glusterfs/cli.log grows.
BZ#955967
Previously, the output message of the command 'gluster volume rebalance volname start/start force/fix-layout start' was ambiguous and poorly formatted:
"volume rebalance: volname: success:
Starting rebalance on volume volname has been successful."
With this fix, the output message of the rebalance command is clearer:
volume rebalance: volname: success: Rebalance on volname has been started Successfully. Use rebalance status command to check status of the rebalance process.
BZ#962570
Previously, Red Hat Gluster Storage did not have a CLI command to display a volume option that was set through the volume set command. With this release, a CLI command is included to display a configured volume option:
# gluster volume get VOLNAME OPTION
BZ#826768
With this release of Red Hat Gluster Storage, gluster volumes are enabled for any industry standard backup application. Glusterfind is a utility that provides the list of files that are modified between the previous backup session and the current period. The commands can be executed at regular intervals to retrieve the list. Multiple sessions for the same volume can be present for different use cases. The changes that are recorded are, new file/directories, data/metadata modifications, rename, and deletes.
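For reference, a typical glusterfind session follows the sequence below; the session, volume, and output file names are placeholders:
# glusterfind create backup_session VOLNAME
# glusterfind pre backup_session VOLNAME /tmp/changed-files.txt
# glusterfind post backup_session VOLNAME
# glusterfind list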

glusterfs-devel

BZ#1222785
Previously, transport-related error messages were displayed on the terminal even when the qemu-img create command ran successfully. With this release, no transport-related error messages appear on the terminal when the qemu-img create command is successful.

glusterfs-fuse

BZ#1122902
Previously, certain non-English locales caused an issue with string conversions to floating point numbers. The conversion failures resulted in a critical error, which caused the GlusterFS native client to fail to mount a volume. With this release, the FUSE daemon uses the US/English locale to convert strings to floating point numbers, and clients with non-English locales can now mount Gluster volumes with the FUSE client.

glusterfs-geo-replication

BZ#1210719
Previously, the stime extended attribute was used to identify the time up to which the slave volume was in sync, and stime was updated only after processing one batch of changelogs. Due to this, if the batch size was large and the geo-replication worker failed before completing a batch, the worker had to reprocess all the changelogs again. With this fix, the batch size is limited based on the size of the changelog file; hence, when the geo-replication worker crashes and restarts, geo-replication re-processes only a small number of changelog files.
BZ#1240196
Previously, a node that was part of the cluster but not of the master volume was not ignored in the validations done during geo-replication pause. Due to this, geo-replication pause failed when one or more nodes of the cluster were not part of the master volume. With this release, nodes that are part of the cluster but not part of the master volume are ignored in all the validations done during geo-replication pause. Geo-replication pause now works even when one or more nodes in the cluster are not part of the master volume.
BZ#1179701
Previously, when a new node was added to a Red Hat Gluster Storage cluster, historical changelogs were not available on it. Due to an issue in comparing the xtime, the hybrid crawl missed a few files to sync. With this fix, the xtime comparison logic used by geo-replication in hybrid crawl is corrected and no files are missed when syncing to the slave.
BZ#1064154
Previously, brick-down cases were handled incorrectly; as a result, the corresponding active geo-replication worker fell back to xsync mode and the switch from active to passive did not happen. Due to this, file sync did not start until the brick was back up, and zero-byte xsync files kept getting generated. With this release, a shared meta-volume is introduced for better handling of brick-down scenarios, which helps geo-replication workers switch properly. Files now continue to sync from the corresponding geo-replication worker of the replica brick if a brick is down, and no zero-byte xsync files are seen.
BZ#1063028
Previously, Geo-replication ignored POSIX ACLs during sync. Due to this, POSIX ACLs were not replicated to Slave Volume from the Master Volume. In this release, an enhancement is made to Geo-replication to sync POSIX ACLs from the Master Volume to the Slave Volume.
BZ#1064309
Previously, a single status file was maintained per node for all the geo-replication workers. Due to this, if any one worker went faulty, the node status went faulty. With this release, a separate status file is maintained for each geo-replication worker per node.
BZ#1222856
Previously, when DHT could not resolve a GFID or path, it raised an ESTALE error similar to an ENOENT error. Due to the unhandled ESTALE exception, the geo-replication worker would crash and tracebacks were printed in the log files. With this release, ESTALE errors in the geo-replication worker are handled in the same way as ENOENT errors, and the worker does not crash because of them.
BZ#1056226
Previously, user-set xattrs were not synced to the slave because geo-replication did not process SETXATTR fops in the changelog or in the hybrid crawl. With this release, this issue is fixed.
BZ#1140183
Previously, concurrent renames and node reboots resulted in the slave having both the source and the destination of a file, with the destination being a zero-byte sticky file. Due to this, the slave volume contained the old data file while the new file was a zero-byte sticky-bit file. With this fix, the introduction of a shared meta-volume to correctly handle brick-down scenarios, along with enhancements in rename handling, resolves this issue.
BZ#1030256
Previously, brick-down cases were handled incorrectly; as a result, the corresponding active geo-replication worker fell back to xsync and never switched to changelog mode when the brick came back. Due to this, files could fail to sync to the slave. With this release, a shared meta-volume is introduced for better handling of brick-down scenarios, which helps geo-replication workers switch properly. Files now continue to sync from the corresponding geo-replication worker of the replica brick if a brick is down.
BZ#1002026
Previously, when a file was renamed and the hash of the renamed file fell on a different brick than the one on which the file was created, the changelog of the new brick recorded the RENAME while the original brick recorded the CREATE entry in its changelog. Since each geo-replication worker (one per brick) syncs data independently of the others, the RENAME could get executed before the CREATE. With this release, all the changes are processed sequentially by geo-replication.
BZ#1003020
Previously, when hard links were being created, in some scenarios the gsyncd process would crash with an invalid argument, after which it would restart and resume the operation normally. With this fix, the possibility of such a crash is drastically reduced.
BZ#1127581
Previously, when changelog was enabled on a volume, it generated a changelog file once every rollover-time (15 seconds), irrespective of whether any operation ran on the brick. This led to a lot of empty changelogs being generated for a brick. With this fix, empty changelogs are discarded and only changelogs that contain file I/O operations are maintained.
BZ#1029899
Previously, the checkpoint target time was compared incorrectly with the stime xattr. Due to this, when the active node went down, the checkpoint status was displayed as Invalid. With this fix, the checkpoint status is displayed as N/A if the geo-replication status is not Active.

glusterfs-server

BZ#1227168
Previously, glusterd could crash if the remove-brick status command was executed while the remove-brick process was notifying glusterd about data migration completion on the same node. With this release, glusterd does not crash regardless of when the remove-brick status command is executed.
BZ#1213245
Previously, if peer probe was executed using IP addresses, volume creation also had to be done using IP addresses. With this release, peer probe can be done using IP addresses and volume creation using host names, and vice versa.
BZ#1102047
The following new command is introduced to retrieve the current op-version of the Red Hat Gluster Storage node:
# gluster volume get volname cluster.op-version
BZ#1227179
Previously, when the NFS service was disabled on all running Red Hat Gluster Storage volumes, glusterd would try connecting to the gluster-nfs process, resulting in repeated connection failure messages in the glusterd logs. With this release, these repeated connection failure messages no longer appear in the glusterd logs.
BZ#1212587
In this release, name resolution and the method used to identify peers has been improved. Previously, GlusterD could not correctly match addresses to peers when a mixture of FQDNs, shortnames and IPs were used, leading to command failures. With this enhancement, GlusterD can match addresses to peers even when using a mixture of address types.
BZ#1202237
Previously, in a multi-node cluster, if gluster volume status and gluster volume rebalance status were executed from two different nodes concurrently, the glusterd daemon could crash. With this fix, this issue is resolved.
BZ#1230101
Previously, glusterd crashed when performing a remove-brick operation on a replicated volume after shrinking the volume from replica nx3 to nx2 and from nx2 to nx1. This was due to an issue with the subvol count (replica set) calculation. With this fix, glusterd does not crash after shrinking a replicated volume from replica nx3 to nx2 or from nx2 to nx1.
BZ#1211165
Previously, brick processes had to be restarted for the read-only option to take effect on a Red Hat Gluster Storage volume. With this release, the read-only option takes effect immediately after it is set on a volume, and the brick processes do not require a restart.
BZ#1211207
In Red Hat Gluster Storage 3.1, GlusterD uses userspace-rcu to protect the internal peer data structures.
BZ#1230525
Previously, in a multi-node cluster, if gluster volume status and gluster volume rebalance status were executed from two different nodes concurrently, the glusterd daemon could crash. With this fix, this issue is resolved.
BZ#1212160
Previously, executing the volume-set command continuously could exhaust the privileged ports in the system. Subsequent gluster commands could then fail with the "Connection failed. Please check if gluster daemon is operational" error. With this release, gluster commands do not consume a port for the volume-set command and do not fail when run continuously.
BZ#1223715
Previously, when the gluster volume status command was executed, glusterd showed the brick pid even when the brick daemon was offline. With this fix, the brick pid is not displayed if the brick daemon is offline.
BZ#1212166
Previously, GlusterD did not correctly match the addresses to peers when a combination of FQDNs, shortnames, and IPs were used, leading to command failures. With this enhancement, GlusterD is able to match addresses to peers even when using a combination of address types.
BZ#1212701
Previously, there was a data loss issue during the replace-brick operation. In this release, the replace-brick operation with data migration support has been deprecated from Gluster. With this fix, the replace-brick command supports only the following form:
# gluster volume replace-brick VOLNAME SOURCE-BRICK NEW-BRICK {commit force}
BZ#874745
With this release of Red Hat Gluster Storage, SELinux is enabled. This enforces mandatory access-control policies for user programs and system services. This limits the privilege of the user programs and system services to the minimum required, thereby reducing or eliminating their ability to cause harm.

nfs-ganesha

BZ#1224619
Previously, deleting a node was intentionally made disruptive: it removed the node from the highly available (HA) cluster and deleted its virtual IP address (VIP). Due to this, any clients that had NFS mounts on the deleted node(s) experienced I/O errors. With this release, when a node is deleted from the HA cluster, clients must remount using one of the remaining valid VIPs. For a less disruptive experience, a fail-over can be initiated by administratively killing the ganesha.nfsd process on a node; the VIP moves to another node and clients switch seamlessly.
BZ#1228152
In this release, support for Parallel NFS (pNFS) is available. pNFS is part of the NFS v4.1 protocol and allows compute clients to access storage devices directly and in parallel. The pNFS cluster consists of an MDS (Meta-Data Server) and DSes (Data Servers). The client sends all read/write requests directly to the DS, and all other operations are handled by the MDS. pNFS support is available with the nfs-ganesha-2.2.1* packages.
BZ#1226844
In this release, ACLs are disabled by default because of performance degradation and because ACL support is not yet complete in the NFS-Ganesha community. To enable ACLs, users should change the configuration file.
BZ#1228153
Previously, the logs from FSAL_GLUSTER/gfapi were saved in the "/tmp" directory. Due to this, the logs would get lost when /tmp gets cleared. With this fix, nfs-ganesha will now log to /var/log/ganesha-gfapi.log and troubleshooting is much easier due to the availability of a longer history.

redhat-storage-server

BZ#1234439
Previously, Red Hat Gluster Storage performed an optimization that was specific to one vendor's MegaRAID controller. This caused unsupported or incorrect settings on other controllers. With this release, this optimization is removed to support a wider range of hardware RAID controllers.

rhs-hadoop

BZ#1093838
Previously, a directory with many small files listed the files grouped by brick. As a consequence, Hadoop job performance decreased because the files were processed in the order of the listing, and the job focused on a single brick at a time. With this fix, the files are sorted by directory listing and not by brick, enhancing performance.

rhs-hadoop-install

BZ#1062401
The previous HTB version of the scripts has been significantly rewritten to enhance modularity and supportability. With a basic understanding of shell command syntax, you can use the auxiliary supporting scripts available at bin/add_dirs.sh and bin/gen_dirs.sh.
BZ#1205886
Previously, in a cluster, if a few nodes had similar names, some of the nodes could be inadvertently skipped. With this fix, all the nodes are processed regardless of naming similarities.
BZ#1217852
Previously, the hdp 2.1 stack was hard-coded and hence only the hdp 2.1 stack was visible. With this fix, all glusterfs-enabled hdp stacks are visible in the Ambari installation wizard.
BZ#1221344
Previously, users in the hadoop group were unable to write to the hive directory. With this fix, these users can now write to the hive directory.
BZ#1209222
Previously, setting entry-timeout=0 eliminated some caching and decreased performance, but this was the only setting that worked due to a bug in the VFS kernel code. With this fix, and because the VFS bug has also been fixed, not setting the entry-timeout and attribute-timeout options (and thus using their default values) provides better performance.
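For reference, these are FUSE mount options that were previously set explicitly; the command below shows the form of the options that should now simply be omitted so the defaults apply. The server, volume, and mount point names are placeholders:
# mount -t glusterfs -o entry-timeout=0,attribute-timeout=0 server.example.com:/VOLNAME /mnt/glusterfs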
BZ#1162181
Previously, usage of https for Ambari was not supported. As a consequence, enable_vol.sh and disable_vol.sh failed. With this fix, the user can choose to use either http or https with Ambari, and the scripts detect this automatically.

vulnerability

BZ#1150461
A flaw was found in the metadata constraints in OpenStack Object Storage (swift). By adding metadata in several separate calls, a malicious user could bypass the max_meta_count constraint, and store more metadata than allowed by the configuration.

Chapter 4. RHEA-2015:1494-08

The bugs contained in this chapter are addressed by advisory RHEA-2015:1494-08. Further information about this advisory is available at https://rhn.redhat.com/errata/RHEA-2015:1494-08.html.

gluster-nagios-addons

BZ#1081900
Previously, there was no way to alert the user when split-brain was detected on a replicated volume. Due to this, users were not aware of the issue and could not take timely corrective action. With this enhancement, the Nagios plugin for self-heal monitoring reports if any of the entries are in split-brain state. The plugin has been renamed from "Volume Self-heal" to "Volume Split-brain status".
BZ#1204314
Previously, the Memory utilization plugin did not deduct cached memory from used memory. This caused Nagios to alert for a low memory condition when none actually existed. With this fix, the used value is obtained by deducting the cached value from it. The Memory utilization plugin now returns the correct value for used memory, and there are no false low-memory alerts.
BZ#1109744
Previously, a misleading notification message stated that quorum was lost for only one volume even if multiple volumes had lost quorum. With this fix, the notification message is corrected to inform the user that quorum is lost on the entire cluster.
BZ#1231422
Previously, due to an issue in the nrpe package, the data got truncated while being transferred, and invalid data was stored in the rrd database. This caused failures in the pnp4nagios chart display. With this fix, the pnp4nagios charts work properly as the entire data is transferred from the nrpe server.

nagios-server-addons

BZ#1119273
Previously, when CTDB was configured and functioning, stopping the ctdb service on a node displayed the status of the ctdb service as 'UNKNOWN' with the status information 'CTDB not Configured', instead of a proper critical error message. Due to this wrong message, the user might think that CTDB was not configured. With this fix, this issue is resolved and the correct error messages are displayed.
BZ#1166602
Previously, when glusterd was down on all the nodes in the cluster, the status information for volume status, self-heal, and geo-replication status was improperly displayed as "temporary error" instead of "no hosts found in cluster" or "hosts are not up". As a consequence, users were led to think that there were issues with volume status, self-heal, or geo-replication that needed to be fixed. With this fix, when glusterd is down on all the nodes of the cluster, the Volume Geo-Replication, Volume Status, and Volume Utilization services are displayed as "UNKNOWN" with the status information "UNKNOWN: NO hosts(with state UP) found in the cluster", and the brick status is displayed as "UNKNOWN" with the status information "UNKNOWN: Status could not be determined as glusterd is not running".
BZ#1219339
Previously, the NFS service that ran as part of Gluster was shown as 'NFS' in Nagios. In this release, another NFS service called 'NFS Ganesha' is introduced; hence, displaying only 'NFS' could confuse the user. With this enhancement, the NFS service in Nagios is renamed to 'Gluster NFS'.
BZ#1106421
Previously, the Quorum status was a passive check. As a consequence, the plugin status was displayed as Pending even if there were no issues with quorum or quorum was not enabled. With this fix, a freshness check is added: if the plugin is not updated or the results are stale by an hour, the freshness check is executed to update the plugin status. If there are no volumes with quorum enabled, the plugin status is displayed as UNKNOWN.
BZ#1177129
Previously, the Nagios plugin only monitored whether the glusterd process was present. As a consequence, the plugin returned an OK status even if the glusterd process was dead but its pid file existed. With this fix, the plugin is updated to monitor the glusterd service state, and the glusterd service status is now reflected correctly.
BZ#1096159
Previously, the logic for determining volume status was based on the brick status and volume type. But the volume type was not displayed in the service status output. With this fix, the Volume type is shown as part of the volume status info.
BZ#1127657
Previously, the 'configure-gluster-nagios' command, which is used to configure Nagios services, asked the user to enter the Nagios server address (either IP or FQDN) but did not verify its correctness. As a consequence, the user could enter an invalid address and end up configuring Nagios with wrong information. With this fix, the 'configure-gluster-nagios' command verifies the address entered by the user to make sure that Nagios is configured correctly to monitor RHGS nodes.

rhsc

BZ#1165677
Now Red Hat Gluster Storage Console supports RDMA transport type for volumes. You can now create and monitor RDMA transport volumes from the Console.
BZ#1114478
With this release of Red Hat Gluster Storage Console, system administrators can install and manage groups of packages through the groupinstall feature of yum. By using yum groups, system administrators need not manually install related packages individually.
BZ#1202731
Previously, the dependency on the rhsc-log-collector package was not specified and hence, the rhsc-log-collector was not updated on running the rhsc-setup command. With this fix, rhsc specification file has been updated and now the rhsc-log-collector package is updated upon running the rhsc-setup command.
BZ#1062612
Previously, when Red Hat Storage 2.1 Update 2 nodes were added to a 3.2 cluster, users were allowed to perform rebalance and remove-brick operations, which are not supported in a 3.2 cluster. As a consequence, further volume operations were not allowed because the volume was locked. With this fix, an error message is displayed when users execute the rebalance and remove-brick commands in a 3.2 cluster.
BZ#1105490
Previously, cookies were not marked as secure. As a consequence, cookies without the Secure flag were allowed to be transmitted through an unencrypted channel, which made them susceptible to sniffing. With this fix, all the required cookies are marked as secure.
BZ#1233621
Striped volume types are no longer supported in Red Hat Gluster Storage. Hence, the striped volume type options are no longer listed during volume creation.
BZ#1108688
Previously, an image on the Nagios home page was not transferred over SSL, and the security details displayed a "Connection Partially Encrypted" message. With this fix, the Nagios news feed that contained the non-encrypted image has been changed, and this issue no longer occurs.
BZ#1162055
Red Hat Gluster Storage Console can now manage and monitor clusters that are not in the same data center as the Console. With this enhancement, the Console can manage Red Hat Gluster Storage clusters running in a remote data center and support the Geo-replication feature.
BZ#858940
Red Hat Gluster Storage now runs with SELinux in enforcing mode, and it is recommended that users set up SELinux correctly. An enhancement has been made to alert users when SELinux is not in enforcing mode: the Console now alerts the user if SELinux is in permissive or disabled mode, and the alerts are repeated every hour.
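To check the current SELinux mode on a storage node and switch it to enforcing, the standard commands are shown below; this is routine system administration, not part of the Console alert itself:

    # Report the current SELinux mode
    getenforce
    # Switch the running system to enforcing mode
    setenforce 1
    # To make the change persistent across reboots, set SELINUX=enforcing in /etc/selinux/config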
BZ#1224281
An enhancement has been made to allow users to separate management and data traffic from the Console. This ensures that management operations are not disrupted by data traffic and vice versa. This enhancement also provides better utilization of network resources.
BZ#1194150
Previously, only TCP ports were monitored. For RDMA-only volumes, the TCP port is not applicable, so these bricks were marked offline. With this fix, both RDMA and TCP ports are monitored, and the bricks reflect the correct status.
BZ#850458
Red Hat Gluster Storage Console now supports the Geo-replication feature. Geo-replication provides a distributed, continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. You can now set up a geo-replication session, perform geo-replication operations, and manage source and destination volumes through the Console.
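The Console drives the standard Gluster geo-replication commands; a minimal sketch of the equivalent CLI session, with placeholder volume and host names, is shown below:

    # Create, start, and check a geo-replication session from the master volume
    # to a slave volume hosted on slavehost.example.com
    gluster volume geo-replication mastervol slavehost.example.com::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost.example.com::slavevol start
    gluster volume geo-replication mastervol slavehost.example.com::slavevol status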
BZ#850472
Red Hat Gluster Storage Console now supports the Snapshot feature. The snapshot feature enables you to create point-in-time copies of Gluster Storage volumes, which you can use to protect data. You can directly access read-only snapshot copies to recover from accidental deletion, corruption, or modification of data. Through Red Hat Gluster Storage Console, you can view the list of snapshots and their status, and create, delete, activate, and deactivate snapshots, as well as restore a volume to a given snapshot.
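The snapshot operations exposed in the Console correspond to the Gluster snapshot CLI; a brief sketch with placeholder volume and snapshot names:

    # Create a point-in-time snapshot of a volume
    gluster snapshot create snap1 datavol
    # List snapshots; the created snapshot name may carry an appended timestamp,
    # so use the exact name reported here in the commands that follow
    gluster snapshot list datavol
    gluster snapshot activate <snap-name>
    # The volume must be stopped before restoring it to a snapshot
    gluster snapshot restore <snap-name>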
BZ#960069
Previously, extended attributes (xattrs) and residual .glusterfs entries were left behind on previously used bricks. As a consequence, creating a new volume from previously used bricks failed from the Red Hat Gluster Storage Console. With this fix, an option has been added in the UI to pass the "force" flag to the volume create command, which clears the xattrs and allows the bricks to be reused.
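On the Gluster CLI, reusing such bricks corresponds to appending the force keyword to the create command, as in this sketch with placeholder names; the new Console option passes this flag on your behalf:

    # Re-create a volume on bricks that were previously part of a deleted volume
    gluster volume create newvol replica 2 \
        server1:/rhgs/brick1/newvol server2:/rhgs/brick1/newvol force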
BZ#1044124
Previously, the host list was not sorted and was displayed in random order in the Hosts drop-down list. With this fix, the hosts in the Hosts drop-down list of the Add Brick dialog are sorted and displayed in order.
BZ#1086718
Previously, the Red Hat Access plugin related answers were not written to the answer file during rhsc setup. With this fix, redhat-access-plugin-rhsc and rhsc-setup-plugin write the answers to the answer file and do not ask the Red Hat Access plugin related questions again.
BZ#1165269
Previously, when you added a Red Hat Gluster Storage node to the Red Hat Gluster Storage Console using its IP address, removed it from the Red Hat Gluster Storage Trusted Storage Pool, and then used the FQDN of the node to add it again to the trusted storage pool, the operation failed. With this fix, the node can be added successfully using its FQDN even if it was earlier added using its IP address and later removed from the trusted storage pool.
BZ#1224279
An enhancement has been made to allow users to monitor the state of geo-replication sessions from the Console. Users are now alerted when new sessions are created or when a session status is faulty.
BZ#1201740
Previously, the Red Hat Storage Console overrode the Red Hat Enterprise Linux values for vm.dirty_ratio and vm.dirty_background_ratio, setting them to 5 and 2 respectively. This occurred when the 'rhs-virtualization' tuned profile was activated while adding Red Hat Storage nodes to the Red Hat Storage Console, and it decreased the performance of the Red Hat Storage Trusted Storage Pool. With this fix, users are given an option to choose the tuned profile during cluster creation and can select the profile that suits their use case.
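To see which tuned profile a node is currently using and to inspect the write-back thresholds mentioned above, the following commands can be run on a storage node (a sketch; the available profile names vary by installation):

    # Show the available tuned profiles and the currently active one
    tuned-adm list
    tuned-adm active
    # Inspect the dirty-page thresholds that the rhs-virtualization profile lowers
    sysctl vm.dirty_ratio vm.dirty_background_ratio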
BZ#1212513
Dashboard feature has been added to the Red Hat Gluster Storage Console. The Dashboard displays an overview of all the entities in Red Hat Gluster Storage like Hosts, Volumes, Bricks, and Clusters. The Dashboard shows a consolidated view of the system and helps the administrator to know the status of the system.
BZ#1213255
An enhancement has been made to monitor the volume capacity information from a single pane.
BZ#845191
Enhancements have been made to allow users to provision the bricks with recommended configuration and volume creation from a single interface.
BZ#977355
Previously, when a server was down, the error message that was returned did not contain the server name. As a consequence, identifying the server that is down using this error message was not possible. With this fix, the server is easily identifiable from the error message.
BZ#1032020
Previously, no error message was displayed if a user tried to stop a volume while a remove-brick operation was in progress. With this fix, the error message "Error while executing action: Cannot stop Gluster Volume. Rebalance operation is running on the volume vol_name in cluster cluster_name" is displayed.
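For reference, the equivalent Gluster CLI sequence shows why the stop is rejected while remove-brick data migration is still running (the volume and brick names are placeholders):

    # Start removing a brick; data is migrated off it in the background
    gluster volume remove-brick vol_name server1:/rhgs/brick1/vol_name start
    # Check the migration progress
    gluster volume remove-brick vol_name server1:/rhgs/brick1/vol_name status
    # Commit (or stop) the remove-brick operation before attempting to stop the volume
    gluster volume remove-brick vol_name server1:/rhgs/brick1/vol_name commit
    gluster volume stop vol_name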
BZ#1121055
Red Hat Gluster Storage Console now supports monitoring and measuring the performance of Gluster volumes and bricks from the Console.
BZ#1107576
Previously, the Console expected a host to be in an operational state before allowing the addition of another host. As a consequence, multiple hosts could not be added together. With this fix, multiple hosts can be added together.
BZ#1061813
Previously, users were unable to see the details of files scanned, moved, and failed in the task pane after stopping, committing, or retaining the remove-brick operation. With this fix, these details are now displayed.
BZ#1229173
Previously, the Reinstall button was not available in the Hosts main tab; it was available only in the Hosts General tab, which made it inconvenient for the user to navigate to 'General' to reinstall hosts. With this fix, the Reinstall button is available in the 'Hosts' main tab.

rhsc-sdk

BZ#1054827
Gluster volume usage statistics are now available through the REST API. The volume usage details are available under /api/clusters/{id}/glustervolumes/{id}/statistics.
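A rough example of querying the new resource with curl, assuming the default admin@internal user and a Console reachable at console.example.com (both are placeholders), with real cluster and volume UUIDs substituted for the {id} segments:

    # Fetch usage statistics for one volume through the Console REST API
    # (-k skips TLS certificate verification for a self-signed Console certificate)
    curl -k -u admin@internal:password \
         -H "Accept: application/xml" \
         https://console.example.com/api/clusters/{cluster-id}/glustervolumes/{volume-id}/statistics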

Appendix A. Revision History

Revision History
Revision 3.1-3    Thu Oct 1 2015    Divya Muntimadugu
Updated Technical Notes for Red Hat Storage 3.1 Update 1 GA, RHBA-2015:1845-05 and RHBA-2015:1846-06.
Revision 3.1-2    Tue Jul 28 2015    Ella Deon Ballard
Adding sort_order.
Revision 3.1-1    Mon Jul 27 2015    Divya Muntimadugu
Version for Red Hat Storage 3.1 GA release.

Legal Notice

Copyright © 2014-2015 Red Hat, Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.