Chapter 4. Known Issues
4.1. Red Hat Gluster Storage
Issues related to glusterd
- BZ#1400092
- Performing add-brick to increase replica count while I/O is going on can lead to data loss. Workaround: Ensure that increasing replica count is done offline, i.e. without clients accessing the volume.
- BZ#1403767
- On a multi-node setup where NFS-Ganesha is configured with multiple volumes, if a node is rebooted at the same time as a volume is stopped, then once the node comes up, the volume status shows the volume as started whereas it should have been stopped. Workaround: Restarting the glusterd instance on the node where the volume status reflects started resolves the issue.
- BZ#1417097
- glusterd takes time to initialize if the setup is slow. As a result, by the time /etc/fstab entries are mounted, glusterd on the node is not ready to serve that mount, and the mount fails. Due to this, shared storage may not get mounted after a node reboots. Workaround: If shared storage is not mounted after the node reboots, check if glusterd is up and mount the shared storage volume manually.
- BZ#1425681
- Running volume rebalance or volume profile commands concurrently from all the nodes can cause one of the glusterd instances on a node to hold a volume lock forever. Due to this, all further commands on the same volume will fail with an another transaction is in progress or locking failed error message. This is primarily seen when sosreport is executed on all the nodes at the same time. Workaround: Restart the glusterd instance on the node where the stale lock exists.
- BZ#1394138
- If a node is deleted from the NFS-Ganesha HA cluster without performing umount, and then a peer detach of that node is performed, the shared storage volume is still accessible at the /var/run/gluster/shared_storage/ location on that node even after it is removed from the HA cluster. Workaround: After a peer is detached from the cluster, manually unmount the shared storage on that peer.
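A minimal sketch of that manual cleanup, assuming the shared storage is still mounted at the default path on the detached peer:
# umount /var/run/gluster/shared_storage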
- BZ#1369420
- An AVC denial message is seen on port 61000 when glusterd is (re)started. Workaround: Execute setsebool -P nis_enabled on and restart glusterd.
- BZ#1395989
- The export configuration is not deleted during volume delete and will still exist on shared storage. Workaround: After performing volume delete, remove the file manually from the shared storage:
/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf
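For example, the manual removal might look like the following, assuming a volume named testvol (hypothetical name):
# rm /var/run/gluster/shared_storage/nfs-ganesha/exports/export.testvol.conf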
- BZ#1400816
- glusterd tries to create a "ganesha.conf" symlink on every node of the trusted storage pool. Symlink creation fails if the nfs-ganesha package is missing. Workaround: Install the nfs-ganesha package on all the nodes.
Issues related to gdeploy
- BZ#1406403
- If a VG already exists and a user tries to create another VG with the same name, gdeploy extends the existing VG instead of failing. Workaround: Ensure that a new VG name is used.
- BZ#1417596
- On Red Hat Enterprise Linux 6, PyYAML is not added as a dependency. Due to this, when gdeploy tries to import PyYAML, it exits because the package is not found. Workaround: Install the PyYAML package from the repository.
- BZ#1408926
- Currently the ssl_enable option is part of the volume section. It is a site-wide change. If more than one volume is used in the same configuration (and within the same set of servers) and ssl_enable is set in all the volume sections, then the SSL operation steps are performed multiple times. This causes the older volumes to fail to mount. Users are then not able to set SSL automatically with a single line of configuration. Workaround: If there is more than one volume on a node, set the variable enable_ssl under one [volume] section and set the keys: 'client.ssl', value: 'on'; 'server.ssl', value: 'on'; 'auth.ssl-allow', value: <comma separated ssl hosts>
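A minimal sketch of setting those keys manually on one volume, assuming a volume named testvol and two hypothetical host names in the allow list:
# gluster volume set testvol client.ssl on
# gluster volume set testvol server.ssl on
# gluster volume set testvol auth.ssl-allow 'server1.example.com,server2.example.com'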
- BZ#1418999
- Deletion of a node from NFS-Ganesha fails because the playbook `hosts' section does not point to the correct node. Workaround: The node has to be deleted using the script: /usr/libexec/ganesha/ganesha-ha.sh.
Issues related to Arbiter Volumes
- BZ#1387494
- If the data bricks of the arbiter volume get filled up, further creation of new entries might succeed in the arbiter brick despite failing on the data bricks with ENOSPC and the application (client) itself receiving an error on the mount point. Thus the arbiter bricks might have more entries. Now when an rm -rf is performed from the client, if the readdir (as a part of rm -rf) gets served on the data brick, it might delete only those entries and not the ones present only in the arbiter. When the rmdir on the parent dir of these entries comes, it won't succeed on the arbiter (errors out with ENOTEMPTY), leading to it not being removed from the arbiter. Workaround: If the deletion from the mount did not complain but the bricks still contain the directories, you need to remove the directory and its associated gfid symlink from the back end. If the directory contains files, they (file + its gfid hardlink) need to be removed too.
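A minimal sketch of that back-end cleanup, assuming a leftover directory named dir on an arbiter brick at /bricks/arbiterbrick (hypothetical paths); the gfid symlink lives under .glusterfs in a path derived from the first two byte pairs of the gfid:
# getfattr -n trusted.gfid -e hex /bricks/arbiterbrick/dir
# rm -rf /bricks/arbiterbrick/dir
# rm -f /bricks/arbiterbrick/.glusterfs/ab/cd/abcd1234-...   # the symlink named after the gfid noted above
Any files inside the directory must be removed along with their gfid hard links under .glusterfs in the same way.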
- BZ#1388074
- If some of the bricks of a replica or arbiter sub volume go down or get disconnected from the client while performing 'rm -rf', the directories may re-appear on the back end when the bricks come up and self-heal is over. When the user again tries to create a directory with the same name from the mount, it may heal this existing directory into other DHT subvols of the volume. Workaround: If the deletion from the mount did not complain but the bricks still contain the directories, the directory and its associated gfid symlink must be removed from the back end. If the directory contains files, they (file + its gfid hardlink) must be removed too.
- BZ#1361518
- If a file create is wound to all bricks and it succeeds only on the arbiter, the application will get a failure. But during self-heal, the file gets created on the data bricks with the arbiter marked as source. Since data self-heal can never happen from the arbiter, 'heal-info' will list the entries forever. Workaround: If 'gluster vol heal <volname> info' shows the pending heals for a file forever, then check if the issue is the same as mentioned above by
- checking that trusted.afr.volname-client* xattrs are zero on the data bricks
- checking that trusted.afr.volname-client* xattrs are non-zero on the arbiter brick *only* for the data part (first 4 bytes). For example:
# getfattr -d -m . -e hex /bricks/arbiterbrick/file | grep trusted.afr.testvol*
getfattr: Removing leading '/' from absolute path names
trusted.afr.testvol-client-0=0x000000540000000000000000
trusted.afr.testvol-client-1=0x000000540000000000000000
- If it is in the above mentioned state, then delete the xattrs:
# for i in $(getfattr -d -m . -e hex /bricks/arbiterbrick/file | grep trusted.afr.testvol* | cut -f1 -d'='); do setfattr -x $i /bricks/arbiterbrick/file; done
Issues related to Distribute (DHT) Translator
- BZ#1118770
- There is no synchronization between mkdir and directory creation as part of self-heal. This results in scenarios where rmdir or rename can proceed and remove the directory while mkdir is completed only on some subvolumes of DHT. Post completion of rmdir or rename, mkdir recreates the just removed or renamed directory with the same gfid. Due to this, in the case of rename, both source and destination directories with the same gfid are present. In the case of rmdir, the directory can be present on some subvols even after rmdir and it can be healed back. In both cases of rename or rmdir, the directory may not be visible on the mount point and hence rm -rf of the parent directory will fail with the error "Directory not empty". Workaround: Try the following steps:
- If rm -rf <dir> fails with ENOTEMPTY for "dir", check whether "dir" contains any subdirectories on the bricks. If present, then delete them.
- If, after the rename, both the source and destination directories exist with the same gfid, contact Red Hat Support for assistance.
- BZ#1260779
- In a distribute-replicate volume, the getfattr -n replica.split-brain-status <path-to-dir> command on the mount point might report that the directory is not in split-brain even though it is. Workaround: To know the split-brain status of a directory, run the following command:
# gluster v heal <volname> info split-brain
- BZ#862618
- After completion of the rebalance operation, there may be a mismatch in the failure counts reported by the gluster volume rebalance status output and the rebalance log files.
- BZ#1409474
- A bug in the remove-brick code can cause migration of some files with multiple hard links to fail. Files may be left behind on the removed brick. These will not be available on the gluster volume once the remove-brick operation is committed. Workaround: Once the remove-brick operation is complete, check for any files left behind on the removed bricks and copy them to the volume via a mount point.
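A minimal sketch of that post-commit check, assuming the removed brick's data is still available at /bricks/removed-brick and the volume is mounted at /mnt/glustervol (hypothetical paths):
# find /bricks/removed-brick -type f ! -path '*/.glusterfs/*'
# cp -a /bricks/removed-brick/dir1/leftover-file /mnt/glustervol/dir1/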
- BZ#1139183
- Red Hat Gluster Storage 3.0 does not prevent clients with older versions from mounting a volume on which rebalance is performed. Mounting a volume on which rebalance is performed with clients older than Red Hat Gluster Storage 3.0 can lead to data loss. Workaround: You must install the latest client version to avoid this issue.
- BZ#1136718
- The AFR self-heal can leave behind a partially healed file if the brick containing the AFR self-heal source file goes down in the middle of the heal operation. If this partially healed file is migrated before the brick that was down comes online again, the migrated file would have incorrect data and the original file would be deleted.
Issues related to Replication (AFR)
- BZ#1426128
- In a replicate volume, if a gluster volume snapshot is taken while a create is in progress, the file may be present in one brick of the replica and not the other on the snapshotted volume. Due to this, when this snapshot is restored and an rm -rf is executed on a directory from the mount, it may fail with ENOTEMPTY. Workaround: If you get an ENOTEMPTY during rm -rf dir, but ls of the directory shows no entries, check the backend bricks of the replica to verify whether files exist on some bricks and not the others. Perform a stat of that file name from the mount so that it is healed to all bricks of the replica. Now when you do rm -rf dir, it should succeed.
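A minimal sketch of triggering the heal for one such file, assuming the volume is mounted at /mnt/glustervol and the missing entry is dir/file1 (hypothetical names):
# stat /mnt/glustervol/dir/file1
# rm -rf /mnt/glustervol/dir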
Issues related to gNFS
- BZ#1413910
- From Red Hat Gluster Storage 3.2 onwards, the option nfs.disable is explicitly set to either on or off for every volume. Snapshots that were created from 3.1.x or earlier do not have that volume option. Workaround: Execute the following command on the restored volumes:
# gluster volume set <volname> nfs.disable off
Without this option set, the restored volume will not be exported via gluster NFS.
Issues related to Tiering
- BZ#1334262
- If the gluster volume tier attach command times out, it could result in either of two situations: either the volume does not become a tiered volume, or the tier daemon is not started. Workaround: When the timeout is observed, follow these steps:
- Check if the volume has become a tiered volume.
- If not, then rerun attach tier.
- If it has, then proceed with the next step.
- Check if the tier daemons were created on each server.
- If the tier daemons were not created, then execute the following command:
# gluster volume tier <volname> start
- BZ#1303298
- Listing the entries on a snapshot of a tiered volume shows incorrect permissions for some files. This is because USS returns the stat information for the linkto files in the cold tier instead of the actual data files, and these files appear to have ---------T permissions. Workaround: FUSE clients can work around this issue by applying any of the following mount options:
- use-readdirp=no (recommended)
- attribute-timeout=0
- entry-timeout=0
NFS clients can work around the issue by applying the noac option.
- BZ#1303045
- When a tier is attached while I/O is occurring on an NFS mount, I/O pauses temporarily, usually for 3 to 5 minutes. If I/O does not resume within 5 minutes, use the gluster volume start volname force command to resume I/O without interruption.
- BZ#1273741
- Files with hard links are not promoted or demoted on tiered volumes.
- BZ#1305490
- A race condition between tier migration and hard link creation results in the hard link operation failing with a File exists error, and logging Stale file handle messages on the client. This does not impact functionality, and file access works as expected. This race occurs when a file is migrated to the cold tier after a hard link has been created on the cold tier, but before a hard link is created to the data on the hot tier. In this situation, the attempt to create a hard link on the hot tier fails. However, because the migration converts the hard link on the cold tier to a data file, and a linkto already exists on the cold tier, the links exist and work as expected.
- BZ#1277112
- When hot tier storage is full, write operations such as file creation or new writes to existing files fail with a No space left on device error, instead of redirecting writes or flushing data to cold tier storage. Workaround: If the hot tier is not completely full, it is possible to work around this issue by waiting for the next CTR promote/demote cycle before continuing with write operations. If the hot tier does fill completely, administrators can copy a file from the hot tier to a safe location, delete the original file from the hot tier, and wait for demotion to free more space on the hot tier before copying the file back.
- BZ#1278391
- Migration from the hot tier fails when the hot tier is completely full because there is no space left to set the extended attribute that triggers migration.
- BZ#1283507
- Corrupted files can be identified for promotion and promoted to hot tier storage. In rare circumstances, corruption can be missed by the BitRot scrubber. This can happen in two ways:
- A file is corrupted before its checksum is created, so that the checksum matches the corrupted file, and the BitRot scrubber does not mark the file as corrupted.
- A checksum is created for a healthy file, the file becomes corrupted, and the corrupted file is not compared to its checksum before being identified for promotion and promoted to the hot tier, where a new (corrupted) checksum is created.
When tiering is in use, these unidentified corrupted files can be 'heated' and selected for promotion to the hot tier. If a corrupted file is migrated to the hot tier, and the hot tier is not replicated, the corrupted file cannot be accessed or migrated back to the cold tier.
- BZ#1306917
- When a User Serviceable Snapshot is enabled, attaching a tier succeeds, but any I/O operations in progress during the attach tier operation may fail with stale file handle errors. Workaround: Disable User Serviceable Snapshots before performing attach tier. Once attach tier has succeeded, User Serviceable Snapshots can be enabled.
Issues related to Snapshot
- BZ#1403169
- If NFS-ganesha was enabled while taking a snapshot, and during the restore of that snapshot it is disabled or shared storage is down, then the snapshot restore will fail.
- BZ#1403195
- Snapshot creation might fail if a brick has started but not all translators have initialized.
- BZ#1309209
- When a cloned volume is deleted, its brick paths (stored under /run/gluster/snaps) are not cleaned up correctly. This means that attempting to create a clone that has the same name as a previously cloned and deleted volume fails with a Commit failed message. Workaround: After deleting a cloned volume, ensure that brick entries in /run/gluster/snaps are unmounted and deleted, and that their logical volumes are removed.
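A minimal sketch of that cleanup, assuming the deleted clone was named clone1, its brick was mounted under /run/gluster/snaps/clone1/brick1, and it was backed by a logical volume in a vg_bricks volume group (all hypothetical names):
# umount /run/gluster/snaps/clone1/brick1
# rm -rf /run/gluster/snaps/clone1
# lvremove /dev/vg_bricks/clone1_lv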
- BZ#1201820
- When a snapshot is deleted, the corresponding file system object in the User Serviceable Snapshot is also deleted. Any subsequent file system access results in the snapshot daemon becoming unresponsive. To avoid this issue, ensure that you do not perform any file system operations on the snapshot that is about to be deleted.
- If the current directory is not a part of the snapshot, for example, snap1, then the user cannot enter the .snaps/snap1 directory.
- When a volume is down and there is an attempt to access the .snaps directory, a negative cache entry is created in the kernel Virtual File System (VFS) cache for the .snaps directory. After the volume is brought back online, accessing the .snaps directory fails with an ENOENT error because of the negative cache entry. Workaround: Clear the kernel VFS cache by executing the following command:
# echo 3 > /proc/sys/vm/drop_caches
Note that this can cause temporary performance degradation.
- BZ#1174618
- If the User Serviceable Snapshot feature is enabled, and a directory has a pre-existing .snaps folder, then accessing that folder can lead to unexpected behavior. Workaround: Rename the pre-existing .snaps folder to another name.
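A minimal sketch of that rename, assuming the conflicting folder lives under /mnt/glustervol/data (hypothetical path):
# mv /mnt/glustervol/data/.snaps /mnt/glustervol/data/.snaps_backup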
- BZ#1394229
- Performing operations which involve client graph changes, such as volume set operations, restoring snapshots, and so on, eventually leads to out-of-memory scenarios for the client processes that mount the volume.
- BZ#1133861
- New snap bricks fail to start if the total snapshot brick count on a node goes beyond 1K. Until this bug is corrected, Red Hat recommends deactivating unused snapshots to avoid hitting the 1K limit.
- BZ#1129675
- Performing a snapshot restore while glusterd is not available in a cluster node or a node is unavailable results in the following errors:
- Executing the gluster volume heal vol-name info command displays the error message Transport endpoint not connected.
- An error occurs when clients try to connect to the glusterd service.
Workaround: Perform snapshot restore only if all the nodes and their corresponding glusterd services are running. Start glusterd by running the following command:
# service glusterd start
- BZ#1059158
- The NFS mount option is not supported for snapshot volumes.
- On restoring a snapshot which was created while the rename of a directory was in progress (the directory has been renamed on the hashed sub-volume but not on all of the sub-volumes), both the old and new directories will exist and have the same GFID. This can cause inconsistencies and issues accessing files in those directories. In DHT, a rename (source, destination) of a directory is done first on the hashed sub-volume and, if successful, on the remaining sub-volumes. At this point in time, both source and destination directories are present in the volume with the same GFID - destination on the hashed sub-volume and source on the rest of the sub-volumes. A parallel lookup (on either source or destination) at this time can result in creation of these directories on the sub-volumes on which they do not yet exist - source directory entry on the hashed and destination directory entry on the remaining sub-volumes. Hence, there would be two directory entries - source and destination - having the same GFID.
- BZ#1236149
- If a node/brick is down, the snapshot create command fails even with the force option.
- BZ#1240227
- LUKS encryption over LVM is currently not supported.
- BZ#1246183
- User Serviceable Snapshots is not supported on Erasure Coded (EC) volumes.
Issues related to Nagios
- BZ#1327017
- Log messages related to quorum being regained are missed by the Nagios server if it is either shut down or has communication issues with the nodes. Due to this, if the Cluster Quorum status was critical prior to the connection issues, then it continues to remain so. Workaround: The administrator can check the alert from the Nagios UI and, once the quorum is regained, manually change the plugin result using the "Submit passive check result for this service" option on the service page.
- BZ#1136207
- The Volume status service shows an All bricks are Up message even when some of the bricks are in an unknown state due to unavailability of the glusterd service.
- When a volume has a large number of files to heal, the volume self heal info command takes time to return results and the nrpe plug-in times out as the default timeout is 10 seconds. Workaround: In /etc/nagios/gluster/gluster-commands.cfg, increase the timeout of the nrpe plug-in to 10 minutes by using the -t option in the command. For example:
$USER1$/gluster/check_vol_server.py $ARG1$ $ARG2$ -o self-heal -t 600
- BZ#1094765
- When certain commands invoked by Nagios plug-ins fail, irrelevant outputs are displayed as part of performance data.
- BZ#1107605
- Executing the sadf command used by the Nagios plug-ins returns invalid output. Workaround: Delete the data file located at /var/log/sa/saDD, where DD is the current date. This deletes the data file for the current day; a new data file that is usable by the Nagios plug-in is created automatically.
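A minimal sketch of removing the current day's data file, assuming the standard sysstat location:
# rm /var/log/sa/sa$(date +%d)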
- BZ#1107577
- The Volume self heal service returns a WARNING when unsynchronized entries are present in the volume, even though these files may be synchronized during the next run of the self-heal process if self-heal is turned on in the volume.
- In Nagios, CTDB service is created by default for all the gluster nodes regardless of whether CTDB is enabled on the Red Hat Gluster Storage node or not.
- BZ#1089636
- In the Nagios GUI, incorrect status information is displayed as Cluster Status OK : None of the Volumes are in Critical State, when volumes are utilized beyond critical level.
- BZ#1111828
- In Nagios GUI, Volume Utilization graph displays an error when volume is restored using its snapshot.
Issues related to Rebalancing Volumes
- BZ#1286074
- While rebalance is in progress, adding a brick to the cluster displays an error message, failed to get index, in the gluster log file. This message can be safely ignored.
- BZ#1286126
- When a node is brought online after rebalance, the status displays that the operation is completed, but the data is not rebalanced. The data on the node is not rebalanced in a remove-brick rebalance operation, and running the commit command can cause data loss. Workaround: Run the rebalance command again if any node is brought down while rebalance is in progress, and also when the rebalance operation is performed after a remove-brick operation.
Issues related to Geo-replication
- BZ#1393362
- If a geo-replication session is created while a gluster volume rebalance is in progress, then geo-replication may miss syncing some files/directories to the slave volume. This is caused by the internal movement of files due to rebalance. Workaround: Do not create a geo-replication session if the master volume rebalance is in progress.
- BZ#1344861
- If the geo-replication configuration is changed when one or more nodes are down in the master cluster, the nodes that are down will have the old configuration when they come back up. Workaround: Execute the geo-replication config command again once all nodes are up. With this, all nodes in the master cluster will have the same geo-replication config options.
- BZ#1293634
- Sync performance for geo-replicated storage is reduced when the master volume is tiered, resulting in slower geo-replication performance on tiered volumes.
- BZ#1302320
- During file promotion, the rebalance operation sets the sticky bit and suid/sgid bit. Normally, it removes these bits when the migration is complete. If readdirp is called on a file before migration completes, these bits are not removed, and remain applied on the client. This means that, if rsync happens while the bits are applied, the bits remain applied to the file as it is synced to the destination, impairing accessibility on the destination. This can happen in any geo-replicated configuration, but the likelihood increases with tiering because the rebalance process is continuous.
- BZ#1102524
- The geo-replication worker goes to a Faulty state and restarts when resumed. It works as expected after the restart, but takes more time to synchronize compared to a resume.
- BZ#1238699
- The Changelog History API expects the brick path to remain the same for a session. However, on snapshot restore, the brick path is changed. This causes the History API to fail and geo-replication to change to Faulty.
Workaround:
- After the snapshot restore, ensure the master and slave volumes are stopped.
- Backup the htime directory (of the master volume):
cp -a <brick_htime_path> <backup_path>
Note: Using the -a option is important to preserve extended attributes. For example:
cp -a /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime /opt/backup_htime/brick0_b0
- Run the following command to replace the OLD path in the htime file(s) with the new brick path, where OLD_BRICK_PATH is the brick path of the current volume, and NEW_BRICK_PATH is the brick path after snapshot restore:
find <new_brick_htime_path> -name 'HTIME.*' -print0 | \
  xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'
For example:
find /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0 | \
  xargs -0 sed -ci 's|/bricks/brick0/b0/|/var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/|g'
- Start the master and slave volumes and the geo-replication session on the restored volume. The status should update to Active.
Issues related to Self-heal
- BZ#1230092
- When you create a replica 3 volume, client quorum is enabled and set to auto by default. However, it does not get displayed in gluster volume info.
- BZ#1240658
- When files are accidentally deleted from a brick in a replica pair in the back-end, and gluster volume heal VOLNAME full is run, then there is a chance that the files may not get healed. Workaround: Perform a lookup on the files from the client (mount). This triggers the heal.
- BZ#1173519
- If you write to an existing file and go over the _AVAILABLE_BRICK_SPACE_, the write fails with an I/O error. Workaround: Use the cluster.min-free-disk option. If you routinely write files up to nGB in size, then you can set min-free-disk to an mGB value greater than n. For example, if your file size is 5GB, which is at the high end of the file size you will be writing, you might consider setting min-free-disk to 8 GB. This ensures that the file will be written to a brick with enough available space (assuming one exists).
# gluster v set _VOL_NAME_ min-free-disk 8GB
Issues related to replace-brick operation
- After the gluster volume replace-brick VOLNAME Brick New-Brick commit force command is executed, the file system operations on that particular volume that are in transit fail.
- After a replace-brick operation, the stat information is different on the NFS mount and the FUSE mount. This happens due to internal time stamp changes when the replace-brick operation is performed.
Issues related to Quota
- BZ#1418227
- If a directory was removed before removing the quota limits previously set on it, then a stale gfid entry corresponding to that directory remains in the quota configuration file. If the last gfid entry in the quota configuration file happens to be stale, the quota list ends up showing blank output. Workaround: The limits can still be examined on individual directories by giving the path in the quota list command. To resolve the list issue:
- Take a backup of quota.conf for safety (/var/lib/glusterd/vols/<volname>/quota.conf)
- Remove the last gfid entry which is either 16 bytes or 17 bytes based on quota.conf version.
- Check the quota.conf version by performing a cat on the file.
- If the conf version is 1.2, then remove the last 17 bytes. Otherwise remove the last 16 bytes from the conf file (see the sketch after this list).
- Perform the quota list operation and check if all the limits are listed now.
- Delete/Restore the backup file based on whether the above step worked/failed.
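A minimal sketch of trimming the stale trailing entry, assuming a volume named testvol whose quota.conf reports version 1.2 (so 17 bytes are removed; use 16 for older versions):
# cp /var/lib/glusterd/vols/testvol/quota.conf /root/quota.conf.bak
# truncate -s -17 /var/lib/glusterd/vols/testvol/quota.conf
# gluster volume quota testvol list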
Issues related to NFS
- After you restart the NFS server, the unlock within the grace-period feature may fail and the locks held previously may not be reclaimed.
- fcntl locking (NFS Lock Manager) does not work over IPv6.
- You cannot perform an NFS mount on a machine on which the glusterfs-NFS process is already running unless you use the NFS mount -o nolock option. This is because glusterfs-nfs has already registered the NLM port with the portmapper.
- If the NFS client is behind a NAT (Network Address Translation) router or a firewall, the locking behavior is unpredictable. The current implementation of NLM assumes that Network Address Translation of the client's IP does not happen.
- The nfs.mount-udp option is disabled by default. You must enable it to use posix-locks on Solaris when using NFS to mount a Red Hat Gluster Storage volume.
- If you enable the nfs.mount-udp option, while mounting a subdirectory (exported using the nfs.export-dir option) on Linux, you must mount using the -o proto=tcp option. UDP is not supported for subdirectory mounts on the GlusterFS-NFS server.
Issues related to NFS-Ganesha
- BZ#1451981
- As of Red Hat Gluster Storage 3.2, the NFS Ganesha configuration files ganesha.conf and ganesha-ha.conf are stored in shared storage (/var/run/gluster/shared_storage). However, it is not possible to ensure that this shared storage is mounted before NFS Ganesha is started. This means that when shared storage is not yet available, NFS Ganesha fails to start. This is corrected in Red Hat Gluster Storage 3.3 but cannot be corrected in Red Hat Gluster Storage 3.2. Workaround: Ensure that NFS Ganesha starts after shared storage is mounted. You can do this by preventing the nfs-ganesha service from starting at boot time, and starting the service manually after you have verified that the shared storage is mounted. To disable the service from starting automatically at boot time, run the following command:
# systemctl disable nfs-ganesha
To verify that shared storage is mounted, run the following command:
# df -h | grep -i shared
server1:/gluster_shared_storage 14G 2.4G 12G 17% /run/gluster/shared_storage
To start the service manually, run the following command:
# systemctl start nfs-ganesha
- BZ#1425504
- The logrotate system is missing options in the configuration file that would enable the deletion of ganesha.log and ganesha-gfapi.log. The absence of these configuration options results in the log files being rotated but never deleted or removed, resulting in the consumption of a lot of space. Workaround: Manually delete the log files to recover the space.
- BZ#1425753
- When there are multiple paths with the same parent volume exported via the NFS-Ganesha server, the handles maintained by the server for the files/directories common to those paths may get merged. Due to this, unexporting one of those shares may result in a segmentation fault of the server when accessed via another share mount. Workaround: Unexport such shares in the reverse order of how they were exported. For example, if the shares are exported in the order mentioned below:
/testvol /testvol/a /testvol/a/b
then unexport those paths in the reverse order, that is:
/testvol/a/b /testvol/a /testvol
The handles merged by the server are not freed as long as all the shares accessing them are not unexported, which avoids the crash.
- BZ#1426523
- The ganesha.conf file is not cleaned completely during nfs-ganesha disable, leaving several stale export entries in the ganesha.conf file. Due to this, enabling nfs-ganesha afterwards fails to bring up the ganesha process. Workaround: Remove the stale entries manually.
- BZ#1403654
- In an nfs-ganesha cluster, when multiple nodes shut down or reboot, pacemaker resources may enter the FAILED/STOPPED state. This may then affect IP failover/failback behaviour. Workaround: Execute the following command for each such resource that went into the FAILED/STOPPED state to restore it to the normal state:
# pcs resource cleanup <resource-id>
- BZ#1398843
- When a parallel rm -rf is performed from multiple NFS clients on a directory hierarchy with a large number of directories and files, due to client-side caching, deletion of certain files results in ESTALE, and the parent directory is not removed, failing with ENOTEMPTY. Workaround: Perform rm -rf * again on the mount point.
- The Corosync service will crash if ifdown is performed after setting up the ganesha cluster. This may impact the HA functionality.
- BZ#1330218
- If a volume is being accessed by heterogeneous clients (that is, both NFSv3 and NFSv4 clients), it is observed that NFSv4 clients take a longer time to recover after a virtual-IP failover due to a node shutdown. Workaround: Use different VIPs for different access protocols (that is, NFSv3 or NFSv4).
- BZ#1416371
- If a gluster volume stop operation on a volume exported via the NFS-Ganesha server fails, there is a probability that the volume will get unexported on a few nodes, in spite of the command failure. This will lead to an inconsistent state across the NFS-Ganesha cluster. Workaround: To restore the cluster back to a normal state, perform the following:
- Identify the nodes where the volume got unexported.
- Re-export the volume manually using the following dbus command:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf string:"EXPORT(Path=/<volname>)"
- BZ#1381416
- When a READDIR is issued on a directory that is mutating, the cookie sent as part of the request could be that of an already deleted file. In such cases, the server returns a BAD_COOKIE error. Due to this, some applications (like the bonnie test suite) that do not handle such errors may error out. This is expected behaviour of the NFS server, and the applications have to be fixed to handle such errors.
- BZ#1398280
- If any of the PCS resources are in the failed state, then the teardown requires a lot of time to complete. Due to this, the gluster nfs-ganesha disable command will time out. Workaround: If gluster nfs-ganesha disable errors out with a timeout, then run pcs status and check whether any resource is in the failed state. Then perform the cleanup for that resource using the following command:
# pcs resource cleanup <resource id>
Re-execute the gluster nfs-ganesha disable command.
- BZ#1328581
- After removing a file, the nfs-ganesha process does a lookup on the removed entry to update the attributes in case any links are present. Because the file has been deleted, the lookup fails with ENOENT, resulting in a misleading log message in gfapi.log. This is expected behaviour and there is no functionality issue here. The log message can be ignored in such cases.
- BZ#1259402
- When vdsmd and abrt are installed alongside each other, vdsmd overwrites the abrt core dump configuration in /proc/sys/kernel/core_pattern. This prevents NFS-Ganesha from generating core dumps. Workaround: Disable core dumps in /etc/vdsm/vdsm.conf by setting core_dump_enable to false, and then restart the abrt-ccpp service:
# systemctl restart abrt-ccpp
- BZ#1257548
- The nfs-ganesha service monitor script which triggers IP failover runs periodically every 10 seconds. The ping-timeout of the glusterFS server (after which the locks of the unreachable client get flushed) is 42 seconds by default. After an IP failover, some locks may not get cleaned by the glusterFS server process, hence reclaiming the lock state by NFS clients may fail. Workaround: It is recommended to set the nfs-ganesha service monitor period interval (default 10 sec) to at least twice the Gluster server ping-timeout (default 42 sec). Hence, either decrease the network ping-timeout using the following command:
# gluster volume set <volname> network.ping-timeout <ping_timeout_value>
or increase the nfs-service monitor interval time using the following commands:
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value>
- BZ#1226874
- If NFS-Ganesha is started before you set up an HA cluster, there is no way to validate the cluster state and stop NFS-Ganesha if the setup fails. Even if the HA cluster setup fails, the NFS-Ganesha service continues running. Workaround: If HA setup fails, run service nfs-ganesha stop on all nodes in the HA cluster.
- BZ#1228196
- If you have fewer than three nodes, pacemaker shuts down HA. Workaround: To restore HA, add a third node with ganesha-ha.sh --add $path-to-config $node $virt-ip.
- BZ#1235597
- On the nfs-ganesha server IP, showmount does not display a list of the clients mounting from that host.
Issues related to Object Store
- The GET and PUT commands fail on large files while using Unified File and Object Storage. Workaround: You must set the node_timeout=60 variable in the proxy, container, and object server configuration files.
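A minimal sketch of the setting, assuming the Swift-style configuration files live under /etc/swift (hypothetical paths); add the same line to the container and object server configuration files as well:
# /etc/swift/proxy-server.conf
[app:proxy-server]
node_timeout = 60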
Issues related to Red Hat Gluster Storage Volumes
- BZ#1286050
- On a volume, when read and write operations are in progress and simultaneously a rebalance operation is performed followed by a remove-brick operation on that volume, then the rm -rf command fails on a few files.
- BZ#1224153
- When a brick process dies, BitD tries to read from the socket used to communicate with the corresponding brick. If it fails, BitD logs the failure to the log file. This results in many repeated messages about the failure to read from the socket, and an increase in the size of the log file.
- BZ#1224162
- Due to an unhandled race in the RPC interaction layer, brick down notifications may result in corrupted data structures being accessed. This can lead to NULL pointer access and a segfault. Workaround: When the Bitrot daemon (bitd) crashes (segfault), you can use volume start VOLNAME force to restart bitd on the node(s) where it crashed.
- A successful scrub of the filesystem (objects) is required to see if a given object is clean or corrupted. When a file gets corrupted and a scrub has not been run on the filesystem, there is a good chance of replicating corrupted objects in cases when the brick holding the good copy was offline when I/O was performed. Workaround: Objects need to be checked on demand for corruption during healing.
- BZ#1241336
- When a Red Hat Gluster Storage node is shut down due to power failure or hardware failure, or when the network interface on a node goes down abruptly, subsequent gluster commands may time out. This happens because the corresponding TCP connection remains in the ESTABLISHED state. You can confirm this by executing the following command:
# ss -tap state established '( dport = :24007 )' dst IP-addr-of-powered-off-RHGS-node
Workaround: Restart the glusterd service on all other nodes.
- BZ#1223306
- gluster volume heal VOLNAME info shows stale entries, even after the file is deleted. This happens due to a rare case when the gfid-handle of the file is not deleted. Workaround: On the bricks where the stale entries are present, for example, <gfid:5848899c-b6da-41d0-95f4-64ac85c87d3f>, check whether the file's gfid handle has been left behind by running the following command and checking whether the file appears in the output, for example, <brick-path>/.glusterfs/58/48/5848899c-b6da-41d0-95f4-64ac85c87d3f:
# find <brick-path>/.glusterfs -type f -links 1
If the file appears in the output of this command, delete the file using the following command:
# rm <brick-path>/.glusterfs/58/48/5848899c-b6da-41d0-95f4-64ac85c87d3f
Issues related to Samba
- BZ#1419633
- CTDB fails to start on setups where the real-time schedulers have been disabled. One such example is where vdsm is installed. Workaround: Enable real-time schedulers by executing echo 950000 > /sys/fs/cgroup/cpu,cpuacct/system.slice/cpu.rt_runtime_us and then restart the ctdb service. For more information, refer to the cgroup section of the Red Hat Enterprise Linux administration guide, https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/System_Administrators_Guide/index.html
- Sharing of subdirectories of a Gluster volume does not work if the shadow_copy2 vfs module is also used. This is because shadow_copy2 checks the local filesystem for the path being shared, and Gluster volumes are remote filesystems accessed using libgfapi. Workaround: Add shadow:mountpoint = / in the share section of smb.conf to bypass this check.
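A minimal sketch of a share section with the bypass in place, assuming a volume named testvol and a hypothetical share name; only the shadow:mountpoint line comes from the workaround above, the rest is illustrative:
[gluster-testvol]
    vfs objects = shadow_copy2 glusterfs
    glusterfs:volume = testvol
    path = /subdir
    shadow:mountpoint = /
    read only = no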
- BZ#1329718
- Snapshot volumes are read-only. All snapshots are made available as directories inside .snaps. Even though snapshots are read-only, the directory attribute of snapshots is the same as the directory attribute of the root of the snapshot volume, which can be read-write. This can lead to confusion, because Windows will assume that the snapshots directory is read-write. The Restore previous version option in file properties gives an Open option, which opens the file from the corresponding snapshot. If opening the file also creates temp files (for example, Microsoft Word files), the open will fail, because temp file creation fails on the read-only snapshot volume. Workaround: Copy such files to a different location instead of directly opening them.
- BZ#1322672
- When vdsm and abrt's ccpp addon are installed alongside each other, vdsmd overwrites abrt's core dump configuration in /proc/sys/kernel/core_pattern. This prevents Samba from generating core dumps due to an SELinux search denial on the new coredump location set by vdsmd. Workaround: Execute the following steps:
- Disable core dumps in /etc/vdsm/vdsm.conf:
core_dump_enable = false
- Restart the abrt-ccpp and smb services:
# systemctl restart abrt-ccpp
# systemctl restart smb
- BZ#1300572
- Due to a bug in the Linux CIFS client, SMB2.0+ connections from Linux to Red Hat Gluster Storage currently do not work properly. SMB1 connections from Linux to Red Hat Gluster Storage, and all connections with supported protocols from Windows, continue to work. Workaround: If practical, restrict Linux CIFS mounts to SMB version 1. The simplest way to do this is to not specify the vers mount option, since the default setting is to use only SMB version 1. If restricting Linux CIFS mounts to SMB1 is not practical, disable asynchronous I/O in Samba by setting aio read size to 0 in the smb.conf file. Disabling asynchronous I/O may have a performance impact on other clients.
- BZ#1282452
- Attempting to upgrade to ctdb version 4 fails when ctdb2.5-debuginfo is installed, because the ctdb2.5-debuginfo package currently conflicts with the samba-debuginfo package. Workaround: Manually remove the ctdb2.5-debuginfo package before upgrading to ctdb version 4. If necessary, install samba-debuginfo after the upgrade.
- BZ#1164778
- Any changes performed by an administrator in a Gluster volume's share section of smb.conf are replaced with the default Gluster hook scripts settings when the volume is restarted. Workaround: The administrator must perform the changes again on all nodes after the volume restarts.
Issues related to SELinux
- BZ#1256635
- Red Hat Gluster Storage does not currently support SELinux Labeled mounts. On a FUSE mount, SELinux cannot currently distinguish file systems by subtype, and therefore cannot distinguish between different FUSE file systems (BZ#1291606). This means that a client-specific policy for Red Hat Gluster Storage cannot be defined, and SELinux cannot safely translate client-side extended attributes for files tracked by Red Hat Gluster Storage. A workaround is in progress for NFS-Ganesha mounts as part of BZ#1269584. When complete, BZ#1269584 will enable Red Hat Gluster Storage support for NFS version 4.2, including SELinux Labeled support.
- BZ#1291194, BZ#1292783
- Current SELinux policy prevents ctdb's 49.winbind event script from executing smbcontrol. This can create inconsistent state in winbind, because when a public IP address is moved away from a node, winbind fails to drop connections made through that IP address.
Issues related to Sharding
- BZ#1332861
- Sharding relies on the block count difference before and after every write, as reported by the underlying file system, and adds that to the existing block count of a sharded file. However, XFS's speculative preallocation of blocks causes this accounting to go wrong, because with speculative preallocation the block count of the shards after the write, as projected by XFS, can be greater than the number of blocks actually written. Due to this, the block count of a sharded file might sometimes be projected to be higher than the actual number of blocks consumed on disk. As a result, commands like du -sh might report a higher size than the actual number of physical blocks used by the file.
General issues
- GFID mismatches cause errors
- If files and directories have different GFIDs on different back-ends, the glusterFS client may hang or display errors. Contact Red Hat Support for more information on this issue.
- BZ#1236025
- The time stamp of files and directories changes on snapshot restore, resulting in a failure to read the appropriate change logs. glusterfind pre fails with the following error: historical changelogs not available. Existing glusterfind sessions fail to work after a snapshot restore. Workaround: Gather the necessary information from existing glusterfind sessions, remove the sessions, perform a snapshot restore, and then create new glusterfind sessions.
- BZ#1260119
- The glusterfind command must be executed from one node of the cluster. If all the nodes of the cluster are not added to the known_hosts list of the node initiating the command, then the glusterfind create command hangs. Workaround: Add all the hosts in the peer list, including the local node, to known_hosts.
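A minimal sketch of populating known_hosts on the initiating node, assuming peers named node1, node2, and node3 plus the local node (hypothetical hostnames):
# for host in node1 node2 node3 $(hostname); do ssh-keyscan -H $host >> ~/.ssh/known_hosts; done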
- BZ#1058032
- While migrating VMs, libvirt changes the ownership of the guest image unless it detects that the image is on a shared filesystem; as a result, the VMs cannot access the disk images because the required ownership is not available. Workaround: Before migration, power off the VMs. When migration is complete, restore the ownership of the VM disk image (107:107) and start the VMs.
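A minimal sketch of restoring the ownership after migration, assuming UID/GID 107 maps to qemu and the disk image lives on the Gluster mount (hypothetical path):
# chown 107:107 /mnt/glustervol/images/vm1-disk.img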
- BZ#1127178
- If a replica brick goes down and comes up while an rm -rf command is executed, the operation may fail with the message Directory not empty. Workaround: Retry the operation when there are no pending self-heals.
Issues related to Red Hat Gluster Storage AMI
- BZ#1267209
- The redhat-storage-server package is not installed by default in a Red Hat Gluster Storage Server 3 on Red Hat Enterprise Linux 7 AMI image. Workaround: It is highly recommended to manually install this package using yum:
# yum install redhat-storage-server
The redhat-storage-server package primarily provides the /etc/redhat-storage-release file, and sets the environment for the storage node.