Chapter 7. RHEA-2014:1278
The bugs contained in this chapter are addressed by advisory RHEA-2014:1278. Further information about this advisory is available at https://rhn.redhat.com/errata/RHEA-2014-1278.html.
gluster-afr
- BZ#1097581
- Previously, data loss could occur when one of the bricks in a replica pair went offline and a new file was created before the other brick came back online. If the first brick became available again before a self-heal ran on that directory, and the second brick then went offline again while new files were created on the first brick, a crash at that point could leave the directory in a stale state even though it contained new data. When both bricks in the replica pair came back online, the newly created data on the first brick was deleted, leading to data loss. With this fix, this data loss is no longer observed.
- BZ#1055707
- Previously, glusterfs stored symlinks to each of the directories present on the bricks in brick-directory/.glusterfs so that they could be accessed via the glusterfs file ID (gfid). In some cases the symlink for a particular directory went missing, and from then on directories were created instead of symlinks for the directories with missing symlinks. With this fix, symlinks are created even in these cases.
- BZ#1120245
- Previously, the metadata self-heal did not deallocate the memory it allocated, which led to high memory usage of the self-heal daemon. With this fix, memory is deallocated as expected, so metadata self-heal of numerous files does not lead to high memory usage of the self-heal daemon.
- BZ#986317
- An enhancement has been made to the gluster volume heal volname info command. With this fix, the command lists only the files or directories that need self-heal, as shown in the example below.
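For example, a minimal invocation might look like the following, where test-vol is a placeholder volume name:
# gluster volume heal test-vol info
Only the entries that still require self-heal are listed for each brick of the volume.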
gluster-dht
- BZ#1116137
- Previously, even if the inode times (mtime, ctime, etc.) had been reset to past values using the setattr command, the values were not reflected in the subsequent metadata (stat) information. With this fix, the inode timestamp values are set with the force option in the inode context during setattr, and the inode timestamps are reflected appropriately.
- BZ#1090986
- Previously, the directory entries were read only from the subvolume that had been up for the longest time. If a newly created directory was not yet present on that subvolume when a snapshot was taken, the restored snapshot mount point did not list the newly created directory. With this fix, the directory entries are filtered from their corresponding hashed subvolumes. Only when the hashed subvolume is NULL (either due to a layout anomaly or because the hashed subvolume is offline) is the entry taken from the subvolume that has been up the longest.
- BZ#1117283
- If a file is not found on its cached subvolume, a lookup operation for the file is sent to all subvolumes. Previously, this operation would identify linkto files as regular files and proceed with file operations on them. With this fix, the linkto file is not identified as a regular file and, if it is stale, it is not linked.
- BZ#1125958
- Previously, some operations would fail if the directory in which they were performed was missing on some bricks in the volume (this could happen if the directory was created while those bricks were down). If a caller bypassed lookup and called access using saved/cached inode information (as the NFS server does), dht_access failed the operation when an ENOENT error was returned. With this fix, if the directory is not found on one sub-volume, the information is fetched from the next sub-volume.
- BZ#1121099
- Previously, when the cluster topology changed due to add-brick, not all subvolumes of DHT contained the directories until a rebalance was completed. With this fix, the problem has been resolved in dht_access, thereby preventing DHT from misrepresenting a directory as a file in the case described above.
gluster-nfs
- BZ#1018894
- In gluster volume set, the values for the keys nfs.rpc-auth-allow and nfs.rpc-auth-reject now support wildcard characters and IPv4 subnetwork patterns in CIDR format. However, wildcard characters and subnetwork patterns must not be mixed, as shown in the example below.
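For example, the following hedged commands (VOLNAME and the address values are placeholders) show the two patterns; a wildcard and a CIDR pattern must not be combined in one value:
# gluster volume set VOLNAME nfs.rpc-auth-allow 192.168.1.0/24
# gluster volume set VOLNAME nfs.rpc-auth-reject 10.70.*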
- BZ#1098862
- Previously, the glusterFS NFS server did not validate unsupported RPC procedures, resulting in segmentation faults. With this fix, the system validates the RPC procedures for the glusterFS NFS ACL program and, as a result, a system crash is averted.
- BZ#1116992
- Previously, mounting a volume over NFS (TCP) with MOUNT over UDP failed due to strict verification of memory allocations. Even with the nfs.mount-udp option enabled, NFS server mount exports over UDP (MOUNT protocol only; NFS itself always uses TCP) were not supported. As a result, when users tried to use the MOUNT service over UDP, connections timed out and the mount operation failed. With this release, the MOUNT service works over UDP as expected and supports mounting of complete volumes. However, it does not support sub-directory exports (for example, server:/volume/subdir).
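As a hedged illustration (server1, VOLNAME, and the mount point are placeholders), the UDP MOUNT service can be exercised roughly as follows:
# gluster volume set VOLNAME nfs.mount-udp on
# mount -t nfs -o vers=3,mountproto=udp server1:/VOLNAME /mnt/nfs
Sub-directory exports such as server1:/VOLNAME/subdir are still expected to fail over UDP, as noted above.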
gluster-quota
- BZ#1092429
- Previously, the quotad process blocked the epoll thread when glusterd was started, which left glusterd deadlocked during startup. As a result, the daemon processes could not start correctly and two instances of the daemon processes were observed. With this fix, quotad is started separately, leaving the epoll thread free to serve other requests. All the daemon processes start properly and only a single instance of each process is displayed.
- BZ#1103688
- Previously, quota limits could not be set or configured because the root squash feature blacklisted the glusterd client used to configure the quota limits on a brick. With this fix, the glusterd client is added to the root-squash exception list, and quota limits can be set without any issue.
- BZ#1030432
- Previously, even if the quota limit was not set, quota sent the quota-deem-statfs key in the dictionary, resulting in incorrect calculations. With this fix, the value of the size field for the mount point is the cumulative value across all the bricks and does not lead to incorrect calculations.
- BZ#1095267
- Previously, while trying to enable quota again, the system tried to access a NULL transport object leading to a crash. With this fix, a new transport connection is created every time quota is enabled.
- BZ#1111468
- Previously, a dictionary leak while updating the quota cache resulted in high memory consumption, leading to an out-of-memory condition when quota was enabled. With this fix, quota memory consumption is reduced and the leak is no longer observed.
- BZ#1020333
- Previously, the extended attributes trusted.glusterfs.quota.limit-set and trusted.glusterfs.volume-id were visible from any FUSE mount point on the client machine. With this fix, these quota-related extended attributes are not visible on a FUSE mount on the client machine, so a client cannot read or write them.
- BZ#1026227
- Previously, stopping a volume displayed a Transport end point not connected state message in the quota auxiliary mount. With this fix, the quota auxiliary mount is unmounted after the volume stop command is executed.
gluster-smb
- BZ#1086827
- Previously, entries in /etc/fstab for glusterFS mounts did not have the _netdev option. This led to a few systems becoming unresponsive. With this fix, the hook scripts define the _netdev option for glusterFS mounts in /etc/fstab and the mount operation is successful, as sketched below.
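A minimal sketch of such an entry in /etc/fstab (the server name, volume name, and mount point are placeholders):
server1:/VOLNAME /mnt/glusterfs glusterfs defaults,_netdev 0 0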
- BZ#1111029
- Previously, when chgrp was performed, glfs_chown failed to change the group because the UID was invalid. Hence, a chgrp operation on any file in a CIFS mount failed with a Permission denied error. With this fix, the libgfapi code has been modified to set the GID, and chgrp does not fail on a CIFS mount if the user and group have the required permission to perform the operation.
- BZ#1104574
- Previously, disabling the user.smb or user.cifs options would start the SMB process. With this fix, if the SMB process is running, a SIGHUP signal is sent to it to reload the configuration; otherwise no action is taken.
- BZ#1056012
- Previously, when a volume sub-directory was exported using Samba in a CTDB setup, the log.ctdb file would display an ERROR: samba directory sub-dir not available message even if users were able to access the share. With this fix, the sub-directory of a volume is accessible from Windows/Linux clients through CTDB and the errors are not seen in the log file.
gluster-snapshot
- BZ#1124583
- Previously, snapshot bricks were mounted with the rw,nouuid mount options. With this fix, the mount options used for the original brick are used.
- BZ#1132058
- Previously, if the brick mount options contained =, then anything after = was omitted. For example, the mount options rw,noatime,allocsize=1MiB,noattr2 were parsed as rw,noatime,allocsize. With this fix, such options are parsed as expected.
- BZ#1134316
- Previously, the default value of the open fd limit was 1024. This was not sufficient, and only ~500 bricks could connect to glusterd with two socket connections per brick. With this fix, the limit is increased to 65536 and glusterd can connect to up to 32768 bricks.
gluster-swift
- BZ#1039569
- Previously, the X-Delete-At and X-Delete-After headers were accepted although the object expiration feature was not fully implemented, leading to confusion. With this fix, the X-Delete-At and X-Delete-After headers are not accepted.
glusterfs
- BZ#1098971
- Previously, rebalance was triggered even if the file was deleted and a directory with the same name was created in the interval between readdir and file migration. Since file migration was attempted using a directory inode, the rebalance process crashed. With this fix, file migration is not attempted if the file obtained during readdir no longer exists. This is done by looking up the gfid associated with the name of the file. If a different file or directory has been created with the same name, it gets a new gfid and the lookup fails. When the lookup fails, migration of the file is skipped.
- BZ#1044646
- Previously, if a user running an application belonged to more than approximately 93 groups, the authentication header in the RPC packets sent from the client to the server exceeded the maximum size. This led to an I/O error: the glusterFS client failed to create the RPC packet and did not send anything to the glusterFS bricks. With this fix, users who belong to more than approximately 93 groups can use Red Hat Storage volumes. When the server.manage-gids option is enabled, the glusterFS native client is not restricted to 32 groups, and group-ownership permissions on files/directories are handled more transparently because server-side ACL checks are applied to all the groups of a user.
- BZ#1018383
- Previously, the brick processes and QEMU (live migration) used the same range of TCP ports for listening. When live migration failed, retries caused another port to be used. This caused conflicts and caused several live migration attempts to fail. With this fix, a new option, base-port, is introduced in the /etc/glusterfs/glusterd.vol file, and live migration works without needing to be retried in order to find a free port. A hedged example follows.
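A hedged sketch of the resulting option in glusterd.vol (the working-directory line reflects the usual default, and 49152 is only an example value):
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option base-port 49152
end-volume
glusterd typically needs to be restarted for changes in this file to take effect.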
- BZ#1110651
- Previously, the Distributed Hash Table (DHT) translator expected the individual sub-volumes to return their local space consumption and availability during file creation as part of the min-free-disk calculation. When the quota-deem-statfs option was enabled on a volume, the quota translator on each brick returned the volume-wide space consumption and availability of disk space. Because of the incorrect input values it received for the min-free-disk calculation, DHT eventually routed all file creations to its first sub-volume. With this fix, the load of file creation operations is balanced correctly based on the min-free-disk criterion.
- BZ#1110311
- This issue occurred when two or more rebalance processes were acting on the same file. After add-brick, if a file hashed to the newly added brick, lookup would fail as the file was not yet present there. In such cases a lookup is performed on all the nodes, and if a linkto file is found it is deleted on the assumption that it is stale (since the previous lookup on the hashed subvolume failed). If rebalance-1 created a linkto file on the newly added brick as part of file migration, this linkto file could be deleted by rebalance-2, which considered it stale. Since the file was under migration and being copied into the hashed subvolume, the file would be lost. The fix adds careful checks for determining what is considered a stale linkto file.
- BZ#1108570
- Previously, when the peer being probed was offline and the peer-probe or peer-detach commands were executed in quick succession, the glusterd management service would become unresponsive. With this fix, the peer-probe and peer-detach commands work as expected.
- BZ#1094716
- Previously, glusterd was not backward compatible with Red Hat Storage 2.1. This led to peer probe not completing successfully when probed from a Red Hat Storage 2.1 peer, and led to glusterd crashing when peer detach was attempted. With this fix, glusterd has been made backward compatible, peer probes complete successfully, and glusterd does not crash.
- BZ#1061580
- Previously, when all the bricks in a replica group went down while writes were in progress on that replica group, the mount would sometimes hang due to stale structures that were not removed from the list. With this fix, the stale structures are removed from the list and the hang no longer occurs.
- BZ#1046908
- Previously, the glusterd management service would not maintain the status of rebalance. As a result, after a node reboot, rebalance processes that were complete would also restart. With this fix, after a node reboot the completed rebalance processes do not restart.
- BZ#1098691
- Previously, nfs-ganesha forced the administrator to restart the nfs-ganesha server if an export was added or removed while nfs-ganesha was already running. With this release, you can add and remove exports without restarting the server.
- BZ#1057540
- Previously, when reading network traces that included WRITE procedures, the details were confusing. A WRITE procedure always had a size of 0 bytes. With this fix, the size of the data for a WRITE procedure is set and Wireshark can be used to display the size of the data.
- BZ#1085254
- Previously, warning messages were not logged when the quota soft limit was met. With this fix, setting the quota soft-timeout and hard-timeout values to zero ensures that warning messages are logged, for example as shown below.
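For example, assuming a volume named VOLNAME (the exact value syntax, such as whether a unit suffix is required, may vary by release):
# gluster volume quota VOLNAME soft-timeout 0
# gluster volume quota VOLNAME hard-timeout 0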
- BZ#1024459
- Previously, creating a hard link where the source and destination files were in the same directory failed in the first attempt. With this fix, hard link creation is successful in the first attempt.
- BZ#1091986
- A new cluster option, cluster.op-version, has been introduced which can be used to bump the cluster operating version. The cluster operating version can be bumped using the command # gluster volume set all cluster.op-version OP-VERSION. The op-version will be bumped only if:
- all the peers in the cluster support it, and
- the new op-version is greater than the current cluster op-version.
This set operation makes no changes other than changing and saving the cluster op-version in the glusterd.info file. This feature is only useful for gluster storage pools that have been upgraded from Red Hat Storage 2.1 to Red Hat Storage 3.0. In such a cluster, the only valid value for the key is 3, the op-version of RHS-3.0. Hence, setting the option cluster.op-version on all volumes bumps up the cluster operating version and allows newer features to be used.
- BZ#1108018
- Previously, the glusterFS management service was not backward compatible with Red Hat Storage 2.1. As a result, the peers entered the peer reject state during a rolling upgrade from Red Hat Storage 2.1. With this fix, the glusterFS management service is made backward compatible and the peers no longer enter a peer reject state.
- BZ#1006809
- Earlier, mkdir failures returned ENOENT when the failure was due to parents not being present. DHT self-heal considers a brick which returned ENOENT during lookup as part of the layout, assuming that the lookup might be racing with a mkdir. Hence, a newly added brick would be considered part of the directory layout even though the directory creation itself might have failed because the parents were not present on the new brick. Subsequently, when a file about to be created within that directory hashed to the new brick, the creation would fail because the parent directory was not present. With this fix, the absence of a parent on a sub-volume (in this case because the directory hierarchy is yet to be constructed on the newly added brick) is treated as an ESTALE error (as opposed to ENOENT). As a result, the newly added brick is not considered part of the layout of the directory and no new files are hashed to the newly added brick.
- BZ#1080245
- Previously, consider a directory structure /quota_limit_dir/subdir where quota_limit_dir is set with some limit. When quota-deem-statfs was enabled, the output of df /quota_limit_dir would display quota-modified values with respect to quota_limit_dir, whereas df /quota_limit_dir/subdir would display the quota-modified values with respect to the volume root (/). With this fix, any subdirectory within quota_limit_dir shows the same modified values as /quota_limit_dir: the nearest parent that has a quota limit set is found and the statvfs is modified with respect to that parent's limit value.
- BZ#976902
- Previously, peer detach force failed if the peer (to be detached) had bricks that were part of a distributed volume. However, if the peer holds all of the bricks of that volume and holds no other bricks, peer detach is successful.
- BZ#1003914
- Previously, when remove-brick commit was executed without first executing remove-brick start, no warning was displayed and the brick was removed, resulting in data loss. With this fix, if remove-brick commit is executed without remove-brick start, an error is displayed: Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y volume remove-brick commit: failed: Brick 10.70.35.172:/brick0 is not decommissioned. Use start or force option.
The corrected sequence is sketched below.
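The corrected sequence, sketched with placeholder names (VOLNAME and HOST:/brick0):
# gluster volume remove-brick VOLNAME HOST:/brick0 start
# gluster volume remove-brick VOLNAME HOST:/brick0 status
# gluster volume remove-brick VOLNAME HOST:/brick0 commit
The commit step should only be run once the status output reports the data migration as completed.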
- BZ#970686
- Previously, a file could not be unlinked if the hashed subvolume was offline and cached subvolume was online. With this fix, upon unlinking the file, the file on the cached subvolume is deleted and the stale link file on the hashed subvolume is deleted upon lookup with the same name.
- BZ#951488
- Previously, the rebalance-status command would display the status even if a rebalance operation was not running on the volume. This was observed only when a remove-brick operation was running on the same volume. With this fix, rebalance-status displays the status only if a rebalance operation is running on the volume.
- BZ#921528
- Previously, at the end of hard link migration, the fop used to return ENOTSUP in all cases. This added to the failure count, and the remove-brick status showed failures for all the files. With this fix, this has been resolved.
- BZ#1058405
- Previously, the performance/write-behind xlator did not track changes to the size of the file correctly when extending writes were done beyond a hole at the end of the file. A normal read from the sparse area (the hole) could hit the server before write-behind had flushed the write with an offset after the hole, and would return an error, since the read was beyond the EOF of the file on the server. If this region was memory-mapped, errors while reading through the memory-mapped area would trigger a SIGBUS signal. Applications do not normally handle this signal and crash or exit prematurely. With this fix, the performance/write-behind xlator tracks the size of the file, so it can identify writes beyond a hole at the end of the file. If a read is done in the hole, it flushes the write before sending the read to the server. Since this write has already extended the file on the server, the subsequent read does not fail. Hence, applications do not receive an unexpected error or SIGBUS and function the same on glusterfs-fuse as on other filesystems.
- BZ#1043566
- Previously, on upgrade of the glusterfs-server package, existing rpmsave files of hook scripts in the /var/lib/glusterd/hooks/1/ directory would get re-saved with an additional .rpmsave suffix, resulting in multiple rpmsave files. With this fix, the hook scripts are treated as config files of the glusterfs-server package and are saved in the standard RPM way.
- BZ#842788
- Previously, the order of the volume list changed when glusterd was restarted. With this fix, volumes are always listed in ascending order.
glusterfs-fuse
- BZ#1086421
- Previously, mount.glusterfs did not return standard error codes, although applications mounting Red Hat Storage volumes over the gluster native protocol expect to receive well-known, documented standard error return values. Returning incorrect or non-standard errors confused the applications mounting the volumes when an error occurred. With this fix, applications do not need special error handling for mounting Red Hat Storage volumes; the standard error values are recognized and handled correctly, as in the example below.
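A hedged shell snippet illustrating the behavior (server1, VOLNAME, and the mount point are placeholders):
# mount -t glusterfs server1:/VOLNAME /mnt/glusterfs || echo "mount failed with exit status $?"
With this fix, the exit status reported in the failure branch follows the standard, documented mount error conventions.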
glusterfs-geo-replication
- BZ#1049014
- Previously, when the georep_session_working_dir configuration was added to geo-replication, the config file of an upgraded geo-rep session was not updated, so geo-rep was unable to get the value of georep_session_working_dir. This led to a geo-rep worker crash. With this fix, the geo-rep upgrade is handled in the code: if geo-replication finds that georep_session_working_dir is missing while running, it upgrades the config file, and no worker crashes are observed.
- BZ#1044420
- Previously, when a geo-rep worker crashed, geo-rep tried to handle the resulting signal from a worker thread, but due to a limitation in Python, signals can be handled only in the main thread. Hence, the geo-rep monitor crashed and syncing did not happen from that node. With this fix, a geo-rep worker crash is handled gracefully in the code, and if a geo-rep worker crashes, the geo-rep monitor does not crash.
- BZ#1095314
- In geo-replication, the working directories for changelog consumption were stored under /var/run/gluster/master/slave-url/brick-hash, that is, under /var/run/gluster*. Because /var/run/gluster* is not picked up by sos-report and the contents of that directory might be wiped out on reboot, the location of the changelog consumption logs and of the working directory for geo-rep changelog consumption has been changed.
- BZ#1105323
- Previously, ping was used to check the connectivity of the slave, even though ping does not need to be enabled on the slave to start a geo-rep session. Hence, geo-rep create failed if ping was disabled on the slave. With this fix, geo-rep checks only SSH connectivity to the slave, and geo-rep create does not fail even when ping is disabled by a firewall.
- BZ#1101910
- Previously, if the user was created without a primary group in a mount-broker setup, geo-rep failed to set proper ownership of .ssh and the authorized keys. Hence, the mount-broker setup failed and the right permissions for .ssh and the authorized keys had to be set manually. With this fix, this issue has been resolved.
- BZ#1064597
- Previously, when using a replicate volume in geo-replication, all the bricks participated in syncing data to the slave. For bricks in a replica pair, one becomes active and the other passive; if a node goes down, the passive brick may become active and vice versa. The switching interval was 60 seconds, so even when a node went down, switching did not happen immediately. This led to a delay in syncing data to the slave. With this fix, the switching time is reduced to 1 second, so a passive node immediately becomes active when the other node goes down and the delay in syncing is reduced.
- BZ#1030052
- During a geo-replication session, the gsyncd process restarts when you set use-tarssh, a geo-replication configuration option, to true, even if it is already set. A hedged example of setting the option follows.
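A hedged example of setting this option, where mastervol and slavehost::slavevol are placeholders for the master volume and the slave specification:
# gluster volume geo-replication mastervol slavehost::slavevol config use-tarssh true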
- BZ#1030393
- Previously, when tar+ssh was used as the sync engine, a file descriptor leak caused the open descriptor count to cross the maximum allowed limit and the gsync daemon to crash. With this fix, the file descriptor leak is fixed and no geo-rep worker crash is observed.
- BZ#1111577
- Previously, geo-replication synchronized files through a hybrid crawl after it completed the full file system crawl and did not use changelogs during that time. Due to this, deletes and renames that happened during that window were not propagated to the slave. Hence, the slave would have additional files compared to the master.
- BZ#1038994
- Previously, when a passive node became active it collected the old changelogs to process, and geo-rep identified and removed the respective changelog file from the list if it had already been processed. If the list became empty, the geo-rep worker crashed since it was unable to process an empty list. With this fix, geo-rep handles an empty list of changelog files and no geo-rep worker crash is observed.
- BZ#1113471
- Geo-replication does not use the xsync crawl for the first crawl; it uses the history crawl instead, even when the change detector is set to xsync.
- BZ#1110672
- Previously while establishing a geo-replication session, the master volume and slave volume sizes were not computed properly and as a result, the geo-replication sessions could not be created. With this fix, the calculation errors are fixed and geo-replication session creation succeeds.
- BZ#1098053
- With this fix, support for a non-root privileged slave volume is added by tweaking the current geo-rep setup process and scripts, without affecting regular (root-privileged) master-slave sessions.
- BZ#1111587
- Previously, when a forced recursive delete (rm -rf) was run on the master, the directories were not deleted on all distribute nodes in the backend for the slave, because the ordering of entrylocks led to a deadlock and the slave mounts were hanging. With this fix, the ordering issue is resolved so that all mounts take the locks in the same order, fixing the deadlock and thus this issue.
- BZ#1058999
- Previously, when the gsyncd.conf for a particular geo-rep session had a missing state-file or pid-file entry, glusterd did not leverage the default template where the information is present. This led to the geo-rep status becoming defunct. With this fix, if entries such as state_file or pid-file are missing in gsyncd.conf, or if gsyncd.conf itself is missing, glusterd looks for the missing configs in gsyncd_template.conf.
- BZ#1104121
- Previously, while setting up mount-broker geo-replication, if the entire slave URL was not provided, the status showed "Config Corrupted". With this fix, you must provide the entire slave URL while setting up mount-broker geo-replication.
glusterfs-server
- BZ#1095686
- Previously, the server quorum framework in glusterd would perform the quorum action (start or stop bricks) unconditionally on a quorum event, even if the new event did not cause the quorum status to change. This could cause bricks which were taken down for maintenance to be started in the middle of maintenance. With this fix, the current and previous quorum status are checked before attempting to start or stop bricks. Bricks are only started or stopped if the quorum status changed. Bricks brought down for maintenance will no longer be started on spurious quorum events.
- BZ#1065862
- Previously, when one or more nodes in the cluster were offline, gluster CLI commands could hang. In this release, with the introduction of a ping timer for glusterd peer connections, commands fail after ping-timeout seconds if one or more nodes are offline. By default, the ping-timeout is configured as 30 seconds for glusterd connections, as sketched below.
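As a hedged sketch, the timeout is controlled by an option line in the glusterd.vol file (30 seconds is the default noted above; glusterd needs a restart for changes in this file to take effect):
option ping-timeout 30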
- BZ#1029444
- Previously, it was possible to get and set the "trusted.glusterfs.volume-id" extended attribute from the mount point. After the fix, the 'trusted.glusterfs.volume-id' xattr is not shown on the mount point, and a permission error is thrown when an attempt is made to set this xattr.
- BZ#1096614
- An enhancement has been added to the readdir-ahead translator. It is enabled by default on newly created volumes in Red Hat Storage 3.0 and improves the readdir performance for those volumes.
Note
readdir-ahead is not compatible with RHS-2.1, so new volumes created with RHS-3.0 cannot be used with RHS-2.1 clients until readdir-ahead is disabled, as sketched below.
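A hedged example of disabling the translator on a volume so that RHS-2.1 clients can access it (VOLNAME is a placeholder):
# gluster volume set VOLNAME performance.readdir-ahead off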
- BZ#1108505
- Previously, the way quotad was started on the new peer during a peer probe led to glusterd being deadlocked. Hence, the peer probe command failed. With this fix, quotad is started in a non-blocking way during peer probe, which no longer blocks quotad, and the peer probe is successful.
- BZ#1109150
- Previously, when multiple snapshot operations were performed simultaneously from different nodes in a cluster, the glusterd daemon peers got disconnected by the ping timer. With this fix, you must disable the ping timer by setting ping-timeout to 0 in the /etc/glusterfs/glusterd.vol file and restarting the gluster daemon service; the peers then do not get disconnected by the ping timer.
- BZ#1035042
- Previously, entries in /etc/fstab for glusterfs mounts did not have the _netdev option. This led to some systems becoming unresponsive. With this fix, the hook scripts define the _netdev option for glusterFS mounts in /etc/fstab and the mount operation is successful.
- BZ#891352
- Red Hat Storage Snapshot is a new feature included in this release. It enables you to take a snapshot of an online (started) Red Hat Storage volume; the result is a crash-consistent snapshot of the specified Red Hat Storage volume. During the snapshot, some of the entry fops are blocked to achieve crash consistency. The snapshot feature is based on thinly provisioned LVM snapshots; therefore, to take a snapshot, all the Red Hat Storage volume bricks must be on independent thinly provisioned LVM. The resulting snapshot is a read-only Red Hat Storage volume, which can only be mounted via FUSE.
- BZ#1048749
- Previously, a subdirectory mount request was successful even though the host was configured with the nfs.rpc-auth-reject option. With this fix, the clients requesting the mount are validated against nfs.rpc-auth-reject irrespective of the type of mount (either a volume mount or a subdirectory mount). As a result, if the host is configured with nfs.rpc-auth-reject, the mount request from that host fails for any type of mount request.
- BZ#1046284
- Previously, executing gluster volume remove-brick without any option defaulted to a forced commit, which could result in data loss. With this fix, remove-brick cannot be executed without an explicit option. You must provide the option on the command line, volume remove-brick VOLNAME [replica COUNT] BRICK ... start|stop|status|commit|force, else the command displays an error.
- BZ#969993
- Previously, gluster volume set help did not display the following configuration options for the write-behind performance translator:
- performance.nfs.flush-behind
- performance.nfs.write-behind-window-size
- performance.nfs.strict-o-direct
- performance.nfs.strict-write-ordering
With this fix, the options are displayed with their descriptions.
- BZ#1006772
- Previously, if the NFS server could not access the NLM port number of the NFS client, the server log displayed Unable to get NLM port of the client. Is the firewall running on client? OR Are RPC services running (rpcinfo -p)? instead of Unable to get NLM port of the client. Is the firewall running on client?. With this fix, this issue has been resolved.
- BZ#1043915
- In this release, two new volume tuning options, server.anonuid and server.anongid, are introduced in the gluster volume set volname command. These options make it possible to define a UID and GID that are used for anonymous access. They are defined per volume, and the server.root-squash option must be enabled along with them. A hedged example follows.
- BZ#1071377
- Previously, if the combined length of the volume name and sub-folders in the brick path was more than 256 characters, and the brick volfile length was more than 256 characters, error messages were displayed. With this fix, more than 256 characters is not allowed.
- BZ#1109795
- Previously, a deadlock in the changelog translator caused the I/O operations to stall and resulted in the file system becoming unresponsive. With this fix, no deadlocks are observed during interruptions in the locked regions.
nfs-ganesha
- BZ#1091921
- Two new commands, gluster vol set volname nfs-ganesha.host IP and gluster vol set volname nfs-ganesha.enable ON, are introduced with this fix. They enable you to use glusterfs volume set options to export or unexport volumes through nfs-ganesha; a hedged example follows.
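For example, with placeholder values (volname is the volume to export and 10.0.0.1 stands for the host running nfs-ganesha):
# gluster vol set volname nfs-ganesha.host 10.0.0.1
# gluster vol set volname nfs-ganesha.enable ON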
- BZ#1104016
- With this release, a new option, Disable_ACL, is added to nfs-ganesha. This option helps in enabling or disabling ACLs. Setting this option to true disables ACLs, and setting it to false enables ACLs. A hedged configuration sketch follows.
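A hedged sketch of how the option might appear in an nfs-ganesha export definition; the surrounding block and field values follow generic nfs-ganesha export syntax and are illustrative only:
EXPORT {
    Export_Id = 1;
    Path = "/volname";
    Pseudo = "/volname";
    Access_Type = RW;
    Disable_ACL = true;    # set to false to enable ACLs
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost";
        Volume = "volname";
    }
}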