Chapter 3. RHSA-2015:1495-10
The bugs contained in this chapter are addressed by advisory RHSA-2015:1495-10. Further information about this advisory is available at https://rhn.redhat.com/errata/RHSA-2015:1495-10.html.
distribution
- BZ#1223238
- The glusterfs packages have been upgraded to upstream version 3.7.1, which provides a number of bug fixes and enhancements over the previous version.
- BZ#1123346
- With this release of Red Hat Gluster Storage Server, you can install and manage groups of packages through the groupinstall feature of yum. By using yum groups, system administrators need not install related packages manually, one at a time.
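For illustration, a typical yum group workflow might look like the following sketch; the group name is a placeholder, and the groups actually available on a given system can be listed first:
# yum grouplist
# yum groupinstall "<group-name>"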
gluster-afr
- BZ#1112512
- Previously, when a replace-brick commit force operation was performed, there was no indication of pending heals on the replaced brick. As a result, if operations succeeded on the replaced brick before it was healed and the brick was marked as a source, there was a potential for data loss. With this fix, the replaced brick is marked as a sink so that it is not considered a source for healing until it has a copy of the files. An illustrative replace-brick and heal-check sequence follows this list.
- BZ#1223916
- Previously, the brick processes could crash while a rebalance operation was in progress. With this fix, this issue is resolved.
- BZ#871727
- Previously, when self-heal was triggered by shd, it did not update the read-children. Due to this, if the other brick died, the VMs went into a paused state because the mount assumed all read-children were down. With this fix, the read-children are repopulated using getxattr and this issue is resolved.
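As referenced in BZ#1112512 above, a replaced brick can be checked for pending heals before it is trusted as a source. A minimal sketch, assuming a hypothetical volume named testvol and hypothetical brick paths:
# gluster volume replace-brick testvol server1:/bricks/old server1:/bricks/new commit force
# gluster volume heal testvol info
The heal info output lists the entries still pending self-heal on each brick.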
gluster-dht
- BZ#1131418
- Previously, when the gf_defrag_handle_hardlink function was executed, setxattr was performed on the internal AFR keys too. This led to AFR aborting the operation with the following error, which resulted in hard link migration failures:
operation not supported
With this fix, setxattr is performed only on the required keys.
- BZ#1047481
- Previously, the extended attributes set on a file while it was being migrated were not set on the destination file. Once migration completed, the source file was deleted, causing those extended attributes to be lost. With this fix, extended attributes set on a file while it is being migrated are also set on the destination file.
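To illustrate the behavior described in BZ#1047481, a user-level extended attribute can be set and read back on a file on a Gluster mount; the mount point and attribute name below are hypothetical:
# setfattr -n user.backup-tag -v nightly /mnt/glustervol/file1
# getfattr -n user.backup-tag /mnt/glustervol/file1
With this fix, such attributes survive even if the file is migrated by rebalance while they are being set.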
gluster-quota
- BZ#1171896
- Previously, when a child directory was created within a parent directory on which a quota was set, executing the df command displayed the size of the entire volume. With this fix, executing the df command displays the size of the directory.
- BZ#1021820
- Previously, the quotad.socket file was located in the /tmp directory. With this release, the quotad.socket file is moved to /var/run/gluster.
- BZ#1039674
- Previously, when quotad was restarted as part of an add-brick or remove-brick operation, a 'Transport endpoint Not Connected' error occurred in the I/O path. With this fix, this issue is resolved.
- BZ#1034911
- Previously, setting the quota limit on an invalid path resulted in the following error message, which did not clearly indicate that a path relative to the gluster volume is required:
Failed to get trusted.gfid attribute on path /mnt/ch2/quotas. Reason : No such file or directory
With this fix, a clearer error message is displayed: please enter the path relative to the volume.
- BZ#1027693
- Previously, the features.quota-deem-statfs volume option was on even when quota was disabled. With this fix, features.quota-deem-statfs is turned off when quota is turned off.
- BZ#1101270
- Previously, setting a quota limit value between 9223372036854775800 and 9223372036854775807, close to the maximum supported value of 9223372036854775807, would fail. With this fix, setting a quota limit value in the range 0 - 9223372036854775807 succeeds.
- BZ#1027710
- Previously, the features.quota-deem-statfs volume option was off by default when quota was enabled. With this fix, features.quota-deem-statfs is turned on by default when quota is enabled. To disable quota-deem-statfs, execute the following command:
# gluster volume set volname quota-deem-statfs off
- BZ#1023416
- Previously, setting the limit usage to 1B failed. With this fix, the issue is resolved.
- BZ#1103971
- Previously, when a quota limit of 16384PB was set, the quota list output for the Soft-limit exceeded and Hard-limit exceeded values was wrongly reported as Yes. With this fix, the supported quota limit range is changed to (0 - 9223372036854775807) and the quota list command provides the correct output.
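The quota behavior described in the entries above can be exercised with the standard quota commands; the volume name and directory below are placeholders:
# gluster volume quota testvol enable
# gluster volume quota testvol limit-usage /dir1 10GB
# gluster volume quota testvol list
The list output reports the configured hard and soft limits and whether they have been exceeded.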
gluster-smb
- BZ#1202456
- Previously, with the case sensitive = no and preserve case = yes options set in /etc/samba/smb.conf, renaming a file to a case-insensitive match of an existing file name would succeed without a warning or error. This led to two files with the same name being shown in a directory and only one of them being accessible. With this fix, the user is warned of an existing file of the same name.
gluster-snapshot
- BZ#1203159
- A new volume can now be created from a snapshot. To create a writable volume from a snapshot, execute the following command:
# gluster snapshot clone clonename snapname
The clonename becomes the volname of the newly created volume.
- BZ#1181108
- When a snapshot is created, the current timestamp in GMT is appended to its name. Due to this, the same snapshot name can be used by multiple snapshots. If a user does not want the timestamp appended to the snapshot name, the no-timestamp option of the snapshot create command can be used (see the example after this list).
- BZ#1048122
- Previously, the snapshot delete command had to be executed multiple times to delete more than one snapshot. Two new commands are now introduced to delete multiple snapshots. To delete all the snapshots present in a system, execute the following command:
# gluster snapshot delete all
To delete all the snapshots present in a specified volume, execute the following command:
# gluster snapshot delete volume volname
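As mentioned in BZ#1181108 above, a snapshot can be created without the timestamp suffix; the snapshot and volume names below are placeholders:
# gluster snapshot create snap1 testvol no-timestamp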
glusterfs
- BZ#1086159
- Previously, the glusterd service crashed when the peer detach command was executed while a snapshot create command was underway. With this fix, glusterd does not crash when these commands are executed concurrently.
- BZ#1150899
- In Red Hat Gluster Storage 3.1, system administrators can create, configure, and use dispersed volumes. Dispersed volumes allow the recovery of the data stored on one or more bricks in case of failure, and they require less storage space than a replicated volume.
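A dispersed volume is created by specifying the disperse and redundancy counts. For example, a sketch of a 3-brick volume that tolerates the loss of one brick (host and brick names are placeholders):
# gluster volume create ecvol disperse 3 redundancy 1 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1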
- BZ#1120592
- Previously, an error occurred while converting a replicated volume to a distributed volume by reducing the replica count to one. With this fix, a replicated volume can be converted to a distributed volume by reducing the replica count to one.
- BZ#1121585
- Previously, when a remove-brick operation was performed on a volume and remove-brick status was then executed to check the status of non-existent bricks on the same volume, the status for these bricks was displayed without checking their validity. With this fix, remove-brick status checks whether the brick details are valid before displaying the status. If the brick details are invalid, the following error is displayed:
Incorrect brick brick_name for volume_name
- BZ#1238626
- Previously, unsynchronized memory management between threads caused the glusterfs client process to crash when one thread tried to access memory that had already been freed by another thread. With this fix, access to the memory location is now synchronized across threads.
- BZ#1203901
- Previously, the Gluster NFS server failed to process RPC requests because of certain deadlocks in the code. This occurred when there were frequent disconnections from the NFS clients after I/O operations. Due to this, NFS clients or the mount became unresponsive. With this release, this issue is resolved and the NFS clients remain responsive.
- BZ#1232174
- In Red Hat Gluster Storage 3.1, system administrators can detect bit rot, that is, the silent corruption of data in a gluster volume. With the BitRot feature enabled, the system administrator can obtain details of the files that are corrupt due to hardware failures.
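BitRot detection is enabled per volume; the volume name below is a placeholder:
# gluster volume bitrot testvol enable
Scrubber behavior (frequency, throttling) can be tuned through the same gluster volume bitrot command family; the available sub-options may vary by release.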
- BZ#826758
- With this release of Red Hat Gluster Storage, system administrators can create tiered volumes (fast and slow tiers), and data is placed optimally between the tiers. Frequently accessed data is automatically placed on the faster tier (typically on SSDs), and infrequently accessed data is placed on slower disks.
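A fast tier is attached to an existing volume with the tiering commands. The sketch below is an assumption based on the upstream 3.7 CLI and the exact syntax may differ by release; host, brick, and volume names are placeholders:
# gluster volume attach-tier testvol replica 2 ssd1:/bricks/hot1 ssd2:/bricks/hot1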
- BZ#1188835
- Previously, the gluster command would log messages of the DEBUG and TRACE log levels in /var/log/glusterfs/cli.log. This caused the log file to grow large quickly. With this release, only messages of log level INFO or higher precedence are logged. This reduces the rate at which /var/log/glusterfs/cli.log grows.
- BZ#955967
- Previously, the output message of the 'gluster volume rebalance volname start/start force/fix-layout start' command was ambiguous and poorly formatted:
"volume rebalance: volname: success: Starting rebalance on volume volname has been successful."
With this fix, the output message of the rebalance command is clearer:
volume rebalance: volname: success: Rebalance on volname has been started Successfully. Use rebalance status command to check status of the rebalance process.
- BZ#962570
- Previously, Red Hat Gluster Storage did not have a CLI command to display a volume option that was set through the volume set command. With this release, a configured volume option can be displayed using the following command:
# gluster volume get VOLNAME OPTION
- BZ#826768
- With this release of Red Hat Gluster Storage, gluster volumes can be used with any industry-standard backup application. Glusterfind is a utility that provides the list of files that were modified between the previous backup session and the current one. The commands can be executed at regular intervals to retrieve the list, and multiple sessions can exist for the same volume for different use cases. The recorded changes are new files/directories, data/metadata modifications, renames, and deletes.
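A typical glusterfind workflow creates a session, generates the list of changed files for a backup run, and then marks the run complete; the session name, volume name, and output file path below are placeholders:
# glusterfind create backupsession testvol
# glusterfind pre backupsession testvol /tmp/changed-files.txt
# glusterfind post backupsession testvol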
glusterfs-devel
- BZ#1222785
- Previously, transport-related error messages were displayed on the terminal even when the qemu-img create command ran successfully. With this release, no transport-related error messages are displayed on the terminal when the qemu-img create command is successful.
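For reference, qemu-img can create an image directly on a Gluster volume using the gluster protocol; the host, volume, and image names below are placeholders:
# qemu-img create -f qcow2 gluster://server1/testvol/vm1.qcow2 10G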
glusterfs-fuse
- BZ#1122902
- Previously, certain non-English locales caused string conversions to floating point numbers to fail. The conversion failures resulted in a critical error, which caused the GlusterFS native client to fail to mount a volume. With this release, the FUSE daemon uses the US/English locale to convert strings to floating point numbers, and systems with non-English locales can now mount a Gluster volume with the FUSE client.
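A Gluster volume is mounted with the native FUSE client as follows; the server name, volume name, and mount point are placeholders:
# mount -t glusterfs server1:/testvol /mnt/testvol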
glusterfs-geo-replication
- BZ#1210719
- Previously, the stime extended attribute was used to identify the time up to which the Slave Volume was in sync, and it was updated only after processing one batch of changelogs. Due to this, if the batch size was large and a geo-replication worker failed before completing a batch, the worker had to reprocess all the changelogs again. With this fix, the batch size is limited based on the size of the changelog file; hence, when a geo-replication worker crashes and restarts, geo-replication reprocesses only a small number of changelog files.
- BZ#1240196
- Previously, a node that was part of the cluster but not of the master volume was not ignored in the validations done during geo-replication pause. Due to this, geo-replication pause failed when one or more nodes of the cluster were not part of the master volume. With this release, nodes that are part of the cluster but not of the master volume are ignored in these validations, and geo-replication pause works even when one or more nodes in the cluster are not part of the master volume.
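Geo-replication pause and resume are issued against the master/slave session; the volume and host names below are placeholders:
# gluster volume geo-replication mastervol slavehost::slavevol pause
# gluster volume geo-replication mastervol slavehost::slavevol resume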
- BZ#1179701
- Previously, when a new node was added to a Red Hat Gluster Storage cluster, historical changelogs were not available on it. Due to an issue in comparing the xtime, the hybrid crawl missed a few files during sync. With this fix, the xtime comparison logic in the geo-replication hybrid crawl is corrected and no files are missed when syncing to the slave.
- BZ#1064154
- Previously, brick-down cases were handled incorrectly; as a result, the corresponding active geo-replication worker fell back to xsync mode and the switch from active to passive did not happen. Due to this, file sync did not start until the brick was up, and zero-byte xsync files kept getting generated. With this release, a shared meta volume is introduced for better handling of brick-down scenarios, which helps geo-replication workers switch properly. Files now continue to sync from the geo-replication worker of the replica brick when a brick is down, and no zero-byte xsync files are seen.
- BZ#1063028
- Previously, Geo-replication ignored POSIX ACLs during sync. Due to this, POSIX ACLs were not replicated from the Master Volume to the Slave Volume. In this release, an enhancement is made to Geo-replication to sync POSIX ACLs from the Master Volume to the Slave Volume.
- BZ#1064309
- Previously, a single status file was maintained per node for all the geo-replication workers. Due to this, if any one worker went faulty, the node status went faulty. With this release, a separate status file is maintained for each geo-replication worker per node.
- BZ#1222856
- Previously, when DHT could not resolve a GFID or path, it raised an ESTALE error similar to an ENOENT error. Due to the unhandled ESTALE exception, the geo-replication worker would crash and tracebacks were printed in the log files. With this release, ESTALE errors are handled in the geo-replication worker in the same way as ENOENT errors, and the worker no longer crashes because of them.
- BZ#1056226
- Previously, user-set xattrs were not synced to the slave because geo-replication did not process SETXATTR fops in the changelog and in the hybrid crawl. With this release, this issue is fixed.
- BZ#1140183
- Previously, concurrent renames and node reboots resulted in the slave having both the source and the destination of a file, with the destination being a zero-byte sticky file. Due to this, the slave volume contained the old data file while the new file was a zero-byte sticky-bit file. With this fix, the introduction of a shared meta volume to correctly handle brick-down scenarios, along with enhancements in rename handling, resolves this issue.
- BZ#1030256
- Previously, brick-down cases were handled incorrectly; as a result, the corresponding active geo-replication worker fell back to xsync and never switched back to changelog mode when the brick came back. Due to this, files could fail to sync to the slave. With this release, a shared meta volume is introduced for better handling of brick-down scenarios, which helps geo-replication workers switch properly. Files now continue to sync from the geo-replication worker of the replica brick when a brick is down.
- BZ#1002026
- Previously, when a file was renamed and the hash of the renamed file fell on a different brick than the brick on which the file was created, the changelog of the new brick recorded the RENAME while the original brick had the CREATE entry in its changelog file. Since each geo-replication worker (one per brick) syncs data independently of the other workers, the RENAME could be executed before the CREATE. With this release, all the changes are processed sequentially by geo-replication.
- BZ#1003020
- Previously, when hard links were being created, gsyncd would in some scenarios crash with an invalid argument, after which it would restart and resume the operation normally. With this fix, the possibility of such a crash is drastically reduced.
- BZ#1127581
- Previously, when changelog was enabled on a volume, a changelog file was generated once every rollover-time (15 seconds), irrespective of whether any operation was run on the brick. This led to a lot of empty changelogs being generated for a brick. With this fix, empty changelogs are discarded and only changelogs that contain file I/O operations are maintained.
- BZ#1029899
- Previously, the checkpoint target time was compared incorrectly with the stime xattr. Due to this, when an active node went down, the checkpoint status was displayed as Invalid. With this fix, the checkpoint status is displayed as N/A if the geo-replication status is not Active.
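A checkpoint can be set on a geo-replication session and its status inspected as follows; the volume and host names are placeholders:
# gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now
# gluster volume geo-replication mastervol slavehost::slavevol status detail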
glusterfs-server
- BZ#1227168
- Previously, glusterd could crash if the remove-brick status command was executed while the remove-brick process was notifying glusterd about data migration completion on the same node. With this release, glusterd does not crash regardless of when the remove-brick status command is executed.
- BZ#1213245
- Previously, if peer probe was executed using IP addresses, volume creation also had to be done using IP addresses. With this release, peer probe can be done using IP addresses and volume creation can be done using host names, and vice versa.
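For example, peers can now be probed by host name or by IP address interchangeably; the names and address below are placeholders:
# gluster peer probe server2.example.com
# gluster peer probe 192.0.2.12
# gluster peer status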
- BZ#1102047
- The following new command is introduced to retrieve the current op-version of the Red Hat Gluster Storage node:
# gluster volume get volname cluster.op-version
- BZ#1227179
- Previously, when the NFS service was disabled on all running Red Hat Gluster Storage volumes, glusterd would try connecting to the gluster-nfs process, resulting in repeated connection failure messages in the glusterd logs. With this release, there are no repeated connection failure messages in the glusterd logs.
- BZ#1212587
- In this release, name resolution and the method used to identify peers have been improved. Previously, GlusterD could not correctly match addresses to peers when a mixture of FQDNs, shortnames, and IPs was used, leading to command failures. With this enhancement, GlusterD can match addresses to peers even when using a mixture of address types.
- BZ#1202237
- Previously, in a multi-node cluster, if gluster volume status and gluster volume rebalance status were executed from two different nodes concurrently, the glusterd daemon could crash. With this fix, this issue is resolved.
- BZ#1230101
- Previously, glusterd crashed when performing a remove-brick operation on a replicated volume after shrinking the volume from replica nx3 to nx2 and from nx2 to nx1. This was due to an issue with the subvolume count (replica set) calculation. With this fix, glusterd does not crash after shrinking a replicated volume from replica nx3 to nx2 and from nx2 to nx1.
- BZ#1211165
- Previously, brick processes had to be restarted for the read-only option to take effect on a Red Hat Gluster Storage volume. With this release, the read-only option takes effect immediately after it is set on a Red Hat Gluster Storage volume, and the brick processes do not require a restart.
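The read-only behavior is controlled through a volume option and now takes effect without restarting brick processes; the volume name below is a placeholder:
# gluster volume set testvol features.read-only on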
- BZ#1211207
- In Red Hat Gluster Storage 3.1, GlusterD uses userspace-rcu to protect the internal peer data structures.
- BZ#1230525
- Previously, in a multi-node cluster, if gluster volume status and gluster volume rebalance status were executed from two different nodes concurrently, the glusterd daemon could crash. With this fix, this issue is resolved.
- BZ#1212160
- Previously, executing the volume set command continuously could exhaust the privileged ports in the system. Subsequent gluster commands could fail with a "Connection failed. Please check if gluster daemon is operational" error. With this release, gluster commands do not consume a port for the volume set command and do not fail when run continuously.
- BZ#1223715
- Previously, when the gluster volume status command was executed, glusterd showed the brick pid even when the brick daemon was offline. With this fix, the brick pid is not displayed if the brick process is offline.
- BZ#1212166
- Previously, GlusterD did not correctly match the addresses to peers when a combination of FQDNs, shortnames, and IPs were used, leading to command failures. With this enhancement, GlusterD is able to match addresses to peers even when using a combination of address types.
- BZ#1212701
- Previously, there was a data loss issue during the replace-brick operation. In this release, the replace-brick operation with data migration support has been deprecated from Gluster. With this fix, replace-brick supports only the following command:
# gluster volume replace-brick VOLNAME SOURCE-BRICK NEW-BRICK {commit force}
- BZ#874745
- With this release of Red Hat Gluster Storage, SELinux is enabled. This enforces mandatory access-control policies for user programs and system services. This limits the privilege of the user programs and system services to the minimum required, thereby reducing or eliminating their ability to cause harm.
nfs-ganesha
- BZ#1224619
- Previously, deleting a node was intentionally made disruptive: it removed the node from the highly available (HA) cluster and deleted its virtual IP address (VIP). Due to this, any clients that had NFS mounts on the deleted node(s) experienced I/O errors. With this release, when a node is deleted from the HA cluster, clients must remount using one of the remaining valid VIPs. For a less disruptive experience, a fail-over can be initiated by administratively killing the ganesha.nfsd process on a node; the VIP will move to another node and clients will switch seamlessly.
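As described above, a fail-over can be triggered administratively by stopping the ganesha.nfsd process on the node to be drained, for example:
# pkill ganesha.nfsd
The HA framework then moves that node's VIP to another node in the cluster.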
- BZ#1228152
- In this release, support for Parallel NFS (pNFS) is introduced. pNFS is part of the NFS v4.1 protocol and allows compute clients to access storage devices directly and in parallel. A pNFS cluster consists of an MDS (Meta-Data Server) and DSes (Data Servers). The client sends all read/write requests directly to the DS, and all other operations are handled by the MDS. pNFS support is available with the nfs-ganesha-2.2.1* packages.
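A pNFS-capable client mounts the volume over NFS version 4.1; the server name, volume name, and mount point below are placeholders:
# mount -t nfs -o vers=4.1 server1:/testvol /mnt/testvol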
- BZ#1226844
- In this release, ACLs are disabled by default because they degrade performance and ACL support is not yet complete in the upstream nfs-ganesha community. To enable ACLs, users should change the configuration file.
- BZ#1228153
- Previously, the logs from FSAL_GLUSTER/gfapi were saved in the /tmp directory. Due to this, the logs would get lost when /tmp was cleared. With this fix, nfs-ganesha now logs to /var/log/ganesha-gfapi.log, and troubleshooting is much easier due to the availability of a longer history.
redhat-storage-server
- BZ#1234439
- Previously, Red Hat Gluster Storage performed an optimization that was specific to one vendor's MegaRAID controller. This caused unsupported or wrong settings on other controllers. With this release, this optimization is removed to support a wider range of hardware RAID controllers.
rhs-hadoop
- BZ#1093838
- Previously, a directory with many small files listed the files grouped by brick. As a consequence, the performance of Hadoop jobs decreased, because the files were processed in listing order and a job focused on a single brick at a time. With this fix, the files are sorted by directory listing and not by brick, which enhances performance.
rhs-hadoop-install
- BZ#1062401
- The previous HTB version of the scripts has been significantly rewritten to enhance modularity and supportability. With a basic understanding of shell command syntax, you can use the auxiliary supporting scripts available at bin/add_dirs.sh and bin/gen_dirs.sh.
- BZ#1205886
- Previously, in a cluster where a few nodes had similar names, some of the nodes could be inadvertently skipped. With this fix, all the nodes are processed regardless of naming similarities.
- BZ#1217852
- Previously, the HDP 2.1 stack was hard-coded and hence only the HDP 2.1 stack was visible. With this fix, all glusterfs-enabled HDP stacks are visible in the Ambari installation wizard.
- BZ#1221344
- Previously, users in the hadoop group were unable to write to the hive directory. With this fix, these users can now write to the hive directory.
- BZ#1209222
- Previously, setting entry-timeout=0 eliminated some caching and decreased performance, but it was the only setting that worked due to a bug in the kernel VFS. With this fix, and because the VFS bug has also been fixed, not setting the entry-timeout and attribute-timeout options (and thus using their default values) provides better performance.
- BZ#1162181
- Previously, the use of https for Ambari was not supported. As a consequence, enable_vol.sh and disable_vol.sh failed. With this fix, the user can choose to use either http or https with Ambari, and the scripts detect this automatically.
vulnerability
- BZ#1150461
- A flaw was found in the metadata constraints in OpenStack Object Storage (swift). By adding metadata in several separate calls, a malicious user could bypass the max_meta_count constraint, and store more metadata than allowed by the configuration.