Chapter 1. RHBA-2015:0682


The bugs contained in this chapter are addressed by advisory RHBA-2015:0682. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2015:0682.html.

gluster-afr

BZ#1156224
Previously, when client-quorum was enabled on the volume and an operation failed on all the bricks, the operation always returned a 'Read-only file system' error instead of the actual error for the failed operation. With this fix, the correct error message is returned.
BZ#1146520
Previously, as AFR's readdirp was not always gathering the entries' attributes from the sub-volume containing the good copy of the entries, the file contents were not properly copied from the snap volume to the actual volume. With this fix, AFR's readdirp gathers the entries' attributes from their respective read children, as long as they hold the good copy of the file/directory.
BZ#1184075
Synchronous three-way replication is now fully supported in Red Hat Storage volumes. Three-way replication yields the best results when used with JBOD configurations in which each physical disk is set up as a RAID-0 virtual disk and serves one brick. You can set quorum on three-way replicated volumes to prevent split-brain scenarios, and you can create three-way replicated volumes on Amazon Web Services (AWS).
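As an illustration (a minimal sketch; the volume name, server host names, and brick paths are placeholders), a three-way replicated volume with client-quorum enabled can be set up with commands along these lines:
# gluster volume create testvol replica 3 server1:/rhs/brick1/testvol server2:/rhs/brick1/testvol server3:/rhs/brick1/testvol
# gluster volume set testvol cluster.quorum-type auto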
BZ#1179563
Previously, the self-heal algorithm with the option set to "full" did not heal sparse files correctly. This was because the AFR self-heal daemon simply read from the source and wrote to the sink; if the source file was sparse (as with VM workloads), zeros were written to the corresponding regions of the sink, causing it to lose its sparseness. With this fix, if the source file is sparse and the data read from both source and sink is zero for a given range, that range is not written to the sink, so the file retains its sparseness.

gluster-dht

BZ#1162306
Previously, simultaneous mkdir operations from multiple clients on the same directories could result in the creation of multiple subdirectories with the same name but different GFIDs on different subvolumes, so only a subset of the files in that subdirectory was visible to the client. This was because colliding mkdir and lookup operations from different clients on the same directory caused each client to read different layout information for the same directory. With this fix, all the files in the subdirectory are visible to the client.
BZ#1136714
Previously, any hard links to a file that were created while the file was being migrated were lost once the migration was completed. With this fix, the hard links are retained.
BZ#1162573
Previously, certain file permissions were changed after the file was migrated by a rebalance operation. With this fix, the file retains its original permissions even after file migration.

gluster-quota

BZ#1029866
Previously, when more than 50% of a quota limit was consumed, renaming a file or directory failed with a 'Disk Quota Exceeded' error even within the same directory. With this fix, the rename succeeds when the file is renamed within the same branch where the quota limit is set. (BZ#1183944, BZ#1167593, BZ#1139104)
BZ#1183920
Previously, listing quota limits with XML output caused the CLI to crash. With this fix, the issue is resolved.
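For reference, the affected invocation is the quota list command combined with the CLI's --xml output option (VOLNAME is a placeholder):
# gluster volume quota VOLNAME list --xml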
BZ#1189792
Previously, when quota was enabled, the logs contained several assert messages because the marker tried to resolve the inode_path for an unlinked inode. With this fix, the inode_path is resolved only after the inode is linked.
BZ#1023430
Previously, when a quota limit was reached, renaming a file or directory failed with a 'Disk Quota Exceeded' error even within the same directory. With this fix, the rename succeeds when the file is renamed within the same branch where the quota limit is set.

gluster-snapshot

BZ#1161111
Previously, a non-boolean value could be set for the features.uss option in the volume option table. This caused subsequent volume set operations to fail because the features.uss option did not contain a valid boolean value. With this fix, the features.uss option only accepts boolean values.
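For example (VOLNAME is a placeholder), the option now accepts only boolean values such as on or off:
# gluster volume set VOLNAME features.uss on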

glusterfs

BZ#1171847
Previously, as part of create operations, new files or directories were exposed to users before their permissions were set, so users could access the file or directory with root:root ownership. With this fix, the file or directory is not exposed to users until all of its permissions and extended attributes have been set.
BZ#959468
Previously, when the glusterd service was stopped while it was updating a peer information file under /var/lib/glusterd/peers, a file with a .tmp suffix was left over. The presence of this file prevented glusterd from restarting successfully. With this fix, glusterd restarts as expected.
BZ#1104618
Previously, running tar on a gluster directory produced the message 'file changed as we read it' even though no updates to the file were in progress. This was because AFR's readdirp was not always gathering the entries' attributes from their corresponding read children. With this fix, when the cluster.consistent-metadata option is enabled, AFR's readdirp gathers entries' attributes from their respective read children as long as they hold the good copy of the file/directory.
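The option named above can be enabled per volume as follows (VOLNAME is a placeholder):
# gluster volume set VOLNAME cluster.consistent-metadata on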
BZ#1182458
Previously, if multiple glusterd synctask transactions on different volumes were run in the background, a stale cluster lock could be left behind that blocked further transactions. With this fix, no stale locks are left in the cluster when multiple glusterd synctask transactions on different volumes are run in the background.

glusterfs-geo-replication

BZ#1186487
Previously, the SSH public keys stored in common_secret.pem.pub, which is copied to all the slave cluster nodes, could be overwritten on the slave node. As a result, when two geo-replication sessions were established simultaneously, one of the sessions failed to start because of wrong public keys. With this fix, the master and slave volume names are prefixed to the common_secret.pem.pub file name, which distinguishes between sessions, so the correct public keys are copied to the slave's authorized_keys file even when geo-replication sessions are created simultaneously.
BZ#1104112
Previously, when a geo-replication session was established with a non-root user on the slave node and the administrator did not remember the user name with which the session was established, the session could not be started, because the user name was not displayed in the status output. With this fix, the user name is displayed in the geo-replication status output, so the administrator knows which user the session was established with.
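For example (volume names, slave host, and user are placeholders; the user@ prefix applies to non-root sessions), the status output that now shows the user name can be viewed with:
# gluster volume geo-replication MASTERVOL geoaccount@SLAVEHOST::SLAVEVOL status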
BZ#1186705
Previously, stale linkto files remained when DHT failed to clean them up, and geo-replication failed to sync those files. With this fix, geo-replication performs an explicit named lookup while syncing files, which successfully synchronizes the linkto files.
BZ#1198056
Previously, the geo-replication worker start time was used as the upper limit for the file system crawl. Because the xtime extended attribute on existing files was updated to the current time, which exceeded that upper limit, the crawl failed to pick those files for syncing. With this fix, the upper-limit comparison is removed from the file system crawl, so no files are missed.
BZ#1128156
Previously, while creating a geo-replication session the public keys were added to $HOME/.ssh/authorized_keys even when the AuthorizedKeysFile directive in /etc/ssh/sshd_config pointed to a different location. As a result, geo-replication could not find the SSH keys and failed to establish a session with the slave. With this fix, geo-replication reads sshd_config when adding the SSH public keys and writes them to the correct file, so a session can be established with a custom AuthorizedKeysFile location.
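As a sketch of such a setup (the path shown is only an example of a custom location), a non-default AuthorizedKeysFile directive in /etc/ssh/sshd_config might look like the following, and geo-replication now honors it when copying keys:
AuthorizedKeysFile /etc/ssh/keys/%u/authorized_keys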
BZ#1164906
Previously, geo-replication did not clean up processed changelog files, and the accumulated files could exhaust the brick's inodes. With this fix, changelog files are archived after processing, so they no longer consume inodes.
BZ#1172332
Previously, in tar+ssh mode, if an entry operation failed for some reason, syncing the data failed with EPERM, geo-replication failed, and the affected file was never synced again. With this fix, retry logic is added to tar+ssh mode, and a virtual setxattr interface is provided to sync specific files that were missed. As a result, entry creation is less likely to fail, and any missed files can be synced through the virtual setxattr interface.
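As a sketch of the virtual setxattr interface mentioned above (the exact attribute name is an assumption here and may differ between releases; consult the geo-replication documentation for your version), a missed file can be queued for syncing from the master mount point with a command of this form:
# setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master/path/to/missed-file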
BZ#1186707
Previously, the replace-brick and remove-brick operations only checked for the presence of a geo-replication session and did not check whether the session was running, so these operations failed whenever a geo-replication session existed. With this fix, the operation checks whether a geo-replication session is running and allows replace-brick or remove-brick to continue only if geo-replication is stopped.
BZ#1144117
Previously, the changelog API consumed unprocessed changelogs from the previous run, which were replayed on the slave and created empty files and directories. With this fix, the working directory is cleaned up before geo-replication starts.

glusterfs-rdma

BZ#1188685
Previously, for tcp,rdma type volumes the RDMA port details were hidden from all forms of volume information, such as volume status, volume details, and the XML output, so users could not see the port details of RDMA bricks. To fix this issue, the following changes were made:
* A new column that prints the RDMA port of a brick was added to the volume status output. If the RDMA brick is not available, the value is zero. The existing port column was changed to the TCP port.
* In volume details, an extra entry for the RDMA port was added and the existing port entry was changed to the TCP port.
* In the XML output, a new "ports" tag with two sub-tags, tcp and rdma, was created. The old port tag is retained for backward compatibility.
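For example (VOLNAME is a placeholder), the new TCP/RDMA port columns and the XML ports tag can be inspected with:
# gluster volume status VOLNAME
# gluster volume status VOLNAME --xml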
BZ#1186127
Previously, buffer registration was performed in the I/O path. To improve performance, the iobuf pool is now pre-registered when RDMA is initialized.
BZ#825748
Previously, log messages reported missing glusterFS RDMA libraries on machines that did not have InfiniBand hardware. However, this is not an error and does not prevent the glusterd service from functioning normally; on machines without InfiniBand hardware, the glusterd service communicates over Ethernet. With this update, the log level for such messages is changed from error to warning.

glusterfs-server

BZ#1104459
Previously, the epoll thread performed socket event handling, and the same thread was used for serving the client or processing the response received from the server. Due to this, other requests were queued until the current epoll thread completed its operation. With multi-threaded epoll, events are distributed across threads, which improves performance through parallel processing of requests and responses.
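If the epoll event-thread tuning options are available in your release (an assumption here, as this advisory does not list them), the number of event threads can be adjusted per volume, for example:
# gluster volume set VOLNAME client.event-threads 4
# gluster volume set VOLNAME server.event-threads 4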
BZ#1144015
Previously, gluster did not validate the input value for the cluster.min-free-disk option, so it accepted percentage values outside the range 0-100 and fractional values when the value was given as a size in bytes. With this fix, a validation function for cluster.min-free-disk was added: gluster now accepts a value in the range 0-100 when the input is a percentage, and an unsigned integer value when the input is a size in bytes.
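For example (VOLNAME is a placeholder), both of the following forms are now accepted, a percentage in the range 0-100 and an unsigned integer size in bytes:
# gluster volume set VOLNAME cluster.min-free-disk 10%
# gluster volume set VOLNAME cluster.min-free-disk 10737418240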
BZ#1177134
Previously, glusterd did not perform server-quorum validation for a few operations such as add-brick, remove-brick, and volume set, so these operations succeeded even when server quorum was lost. With this fix, server-quorum validation is performed, and all such operations (except setting the quorum-related volume options and the "volume reset all" command) are blocked when server quorum is lost.
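The server-quorum settings referred to above are configured with options along these lines (VOLNAME and the ratio value are placeholders):
# gluster volume set VOLNAME cluster.server-quorum-type server
# gluster volume set all cluster.server-quorum-ratio 51%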
BZ#1181051
Previously, the CLI command logs were written to a hidden file named .cmd_log_history. This file should not be hidden. With this change, the file is no longer hidden and has been renamed to cmd_history.log.
BZ#1132920
Previously, gluster volume set help showed an incorrect description for server.statedump-path. With this fix, the path description is corrected.
BZ#1181044
Previously, there was no mechanism to dump the run-time data structures of a glusterd process. With this fix, users can take a statedump of a given glusterd process at run time using kill -USR1 PID, where PID is the process ID of the glusterd instance running on that node.
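For example (the dump directory shown is an assumption; check your glusterd configuration for the actual statedump location):
# kill -USR1 $(pidof glusterd)
# ls /var/run/gluster/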
BZ#1099374
Previously, when a statedump was taken, the GFID of a barriered fop was displayed as 0 in the statedump file of the brick, because the statedump code was not referring to the correct GFID. With this fix, the statedump code uses the correct GFID, so the GFID is no longer 0 in the statedump file when barrier is enabled and the user takes a statedump of the volume.
BZ#987511
Previously, the gluster pool list output was misaligned when the hostname was longer than 8 characters. This issue is now fixed.

gstatus

BZ#1192153
Previously, the gstatus command was unable to identify the local node on Red Hat Enterprise Virtual Machine. This was because the code whitelisted the NICs used to identify the local gluster node's IP and FQDN, so configurations running gluster on an unlisted interface prevented the localhost from resolving correctly to match the internal server names used by a brick. With this fix, the external dependency on the python-netifaces module is removed and a blacklist of NICs (such as tun, tap, lo, and virbr devices) is used instead, making resolution of the localhost to a name/IP more reliable. This enables gstatus to more reliably identify IPs and names for hosts as it discovers the trusted pool configuration.

vdsm

BZ#1190692
Previously, the vdsm-tool configure --force command did not configure qemu.conf properly and the vdsm service failed to start, because the certificates were not available in /etc/pki/vdsm/certs. With this fix, vdsm-tool configure --force works from the first run and the vdsm service starts as expected.
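For example, the affected workflow is along these lines (the vdsmd service name is an assumption based on standard VDSM packaging):
# vdsm-tool configure --force
# service vdsmd start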
BZ#1201628
Virtual memory settings in Red Hat Storage are reset to the Red Hat Enterprise Linux defaults to improve I/O performance.