Chapter 4. Known Issues
4.1. Red Hat Gluster Storage
Issues related to FUSE
- BZ#1508999
- When a subdirectory is mounted on a GlusterFS client, it cannot perform the self-heal operation right after add-brick; rm -rf * and other modification operations fail as well. Workaround: After add-brick, execute the stat command on the exported directories from the volume mount point (on one of the nodes) so that the exported subdirectories are healed. Alternatively, to make the subdirectories work normally, run the gluster volume rebalance command; this takes more time than running stat on the exported directories.
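For example, assuming the volume is named testvol, is mounted at /mnt/testvol, and exports the subdirectory dir1 (all hypothetical names):
# stat /mnt/testvol/dir1
Alternatively, trigger a rebalance (slower):
# gluster volume rebalance testvol start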
Issues related to glusterd
- BZ#1400092
- Performing add-brick to increase the replica count while I/O is in progress can lead to data loss. Workaround: Ensure that the replica count is increased offline, that is, with no clients accessing the volume.
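A minimal sketch of the offline procedure, assuming a hypothetical replica 2 volume named testvol being expanded to replica 3 after all clients have unmounted it:
# gluster volume add-brick testvol replica 3 server3:/rhgs/brick1/testvol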
- BZ#1403767
- On a multi-node setup where NFS-Ganesha is configured, if the setup has multiple volumes and a node is rebooted at the same time as a volume is stopped, then once the node comes up the volume status shows the volume in the started state, whereas it should have been stopped. Workaround: Restarting the glusterd instance on the node where the volume status reflects started resolves the issue.
- BZ#1417097
- glusterd takes time to initialize if the setup is slow. As a result, by the time the /etc/fstab entries are mounted, glusterd on the node is not ready to serve that mount, and the mount fails. Due to this, shared storage may not get mounted after a node reboots. Workaround: If shared storage is not mounted after the node reboots, check whether glusterd is up and mount the shared storage volume manually.
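A sketch of the manual recovery, assuming the standard shared storage volume gluster_shared_storage and its usual mount point (match the entry in /etc/fstab):
# systemctl status glusterd
# mount -t glusterfs localhost:/gluster_shared_storage /var/run/gluster/shared_storage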
- BZ#1425681
- Running the volume rebalance or volume profile commands concurrently from all the nodes can cause one of the glusterd instances on a node to hold a volume lock forever. Due to this, all further commands on the same volume will fail with an 'another transaction is in progress' or 'locking failed' error message. This is primarily seen when sosreport is executed on all the nodes at the same time. Workaround: Restart the glusterd instance on the node where the stale lock exists.
- BZ#1394138
- If a node is deleted from the NFS-Ganesha HA cluster without performing umount, and a peer detach of that node is then performed, that volume is still accessible at the /var/run/gluster/shared_storage/ location on that node even after it is removed from the HA cluster. Workaround: After a peer is detached from the cluster, manually unmount the shared storage on that peer.
- BZ#1369420
- An AVC denial message is seen on port 61000 when glusterd is (re)started. Workaround: Execute the following command and then restart glusterd:
# setsebool -P nis_enabled on
Issues related to gdeploy
- BZ#1408926
- Currently the ssl_enable option is part of the volume section. It is a site-wide change. If more than one volume is used in the same configuration (and within the same set of servers) and ssl_enable is set in all the volume sections, then the SSL operation steps are performed multiple times. This causes the older volumes to fail to mount, and users are then unable to set up SSL automatically with a single line of configuration. Workaround: If there is more than one volume on a node, set the variable enable_ssl under one [volume] section and set the keys: 'client.ssl', value: 'on'; 'server.ssl', value: 'on'; 'auth.ssl-allow', value: <comma separated ssl hosts>
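A sketch of the workaround in a gdeploy configuration fragment, with hypothetical volume and host names; the exact gdeploy keys and section naming may differ in your gdeploy version:
[volume]
action=create
volname=vol1
enable_ssl=yes
key=client.ssl,server.ssl,auth.ssl-allow
value=on,on,"client1.example.com,client2.example.com"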
Issues related to Arbiter Volumes
- BZ#1387494
- If the data bricks of an arbiter volume get filled up, further creation of new entries might succeed on the arbiter brick despite failing on the data bricks with ENOSPC, with the application (client) itself receiving an error on the mount point. Thus the arbiter brick might have more entries. Now, when an rm -rf is performed from the client, if the readdir (as part of the rm -rf) gets served from a data brick, it might delete only those entries and not the ones present only on the arbiter. When the rmdir on the parent directory of these entries arrives, it does not succeed on the arbiter (it errors out with ENOTEMPTY), so the directory is not removed from the arbiter. Workaround: If the deletion from the mount did not complain but the bricks still contain the directories, remove the directory and its associated gfid symlink from the back end. If the directory contains files, they (each file and its gfid hardlink) need to be removed too.
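A sketch of the back-end cleanup on the arbiter brick, assuming a hypothetical brick path /bricks/arbiterbrick and a leftover directory dir1; a directory's gfid symlink lives under .glusterfs/<first two characters of gfid>/<next two characters>/<full gfid>:
# getfattr -n trusted.gfid -e hex /bricks/arbiterbrick/dir1
# rm -f /bricks/arbiterbrick/.glusterfs/<first two characters>/<next two characters>/<full gfid>
# rm -rf /bricks/arbiterbrick/dir1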
- BZ#1388074
- If some of the bricks of a replica or arbiter subvolume go down or get disconnected from the client while performing rm -rf, the directories may reappear on the back end when the bricks come up and self-heal is over. When the user again tries to create a directory with the same name from the mount, it may heal this existing directory into other DHT subvolumes of the volume. Workaround: If the deletion from the mount did not complain but the bricks still contain the directories, the directory and its associated gfid symlink must be removed from the back end. If the directory contains files, they (each file and its gfid hardlink) must be removed too.
- BZ#1361518
- If a file create is wound to all bricks and succeeds only on the arbiter, the application gets a failure. But during self-heal, the file gets created on the data bricks with the arbiter marked as source. Since data self-heal can never happen from the arbiter, heal-info will list the entries forever. Workaround: If 'gluster volume heal <volname> info' shows pending heals for a file forever, check whether the issue is the same as mentioned above by:
- checking that the trusted.afr.volname-client* xattrs are zero on the data bricks
- checking that the trusted.afr.volname-client* xattrs are non-zero on the arbiter brick *only* for the data part (first 4 bytes)
For example:
# getfattr -d -m . -e hex /bricks/arbiterbrick/file | grep trusted.afr.testvol*
getfattr: Removing leading '/' from absolute path names
trusted.afr.testvol-client-0=0x000000540000000000000000
trusted.afr.testvol-client-1=0x000000540000000000000000
- If it is in the above mentioned state, then delete the xattrs:
# for i in $(getfattr -d -m . -e hex /bricks/arbiterbrick/file | grep trusted.afr.testvol* | cut -f1 -d'='); do setfattr -x $i /bricks/arbiterbrick/file; done
Issues related to Distribute (DHT) Translator
- BZ#1118770
- There is no synchronization between mkdir and directory creation as part of self-heal. This results in scenarios where rmdir or rename can proceed and remove the directory while mkdir has completed only on some subvolumes of DHT. After rmdir or rename completes, mkdir recreates the just removed or renamed directory with the same gfid. Due to this, in the case of rename, both source and destination directories with the same gfid are present. In the case of rmdir, the directory can be present on some subvolumes even after rmdir and can be healed back. In both cases, the directory may not be visible on the mount point, and hence rm -rf of the parent directory will fail with a "Directory not empty" error. Workaround: Follow these steps:
- If rm -rf dir fails with ENOTEMPTY for dir, check whether dir contains any subdirectories on the bricks. If present, delete them (see the sketch after this list).
- If after a rename both the source and destination directories exist with the same gfid, contact Red Hat Support for assistance.
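A minimal sketch of the first step, assuming the bricks of the volume sit at hypothetical paths such as /bricks/brick1 on each server:
# ls /bricks/brick*/dir        (run on every server hosting a brick of the volume)
# rm -rf /bricks/brick1/dir/stale_subdir        (only for subdirectories that exist on the bricks but not on the mount)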
- BZ#1136718
- The AFR self-heal can leave behind a partially healed file if the brick containing the AFR self-heal source file goes down in the middle of the heal operation. If this partially healed file is migrated before the brick that was down comes online again, the migrated file would have incorrect data and the original file would be deleted.
Issues related to Replication (AFR)
- BZ#1426128
- In a replicate volume, if a gluster volume snapshot is taken while a create is in progress, the file may be present on one brick of the replica and not the other on the snapshotted volume. Due to this, when the snapshot is restored and rm -rf is executed on a directory from the mount, it may fail with ENOTEMPTY. Workaround: If you get an ENOTEMPTY during rm -rf dir but ls of the directory shows no entries, check the back-end bricks of the replica to verify whether files exist on some bricks and not on the others. Perform a stat of that file name from the mount so that it is healed to all bricks of the replica. After that, rm -rf dir should succeed.
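For example, if the back-end check shows a file dir/file1 present on only one brick of the replica (hypothetical names, volume mounted at /mnt/testvol):
# stat /mnt/testvol/dir/file1
# rm -rf /mnt/testvol/dir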
Issues related to gNFS
- BZ#1413910
- From Red Hat Gluster Storage 3.2 onwards, the nfs.disable option is explicitly set to either on or off for every volume. Snapshots created from 3.1.x or earlier do not have that volume option, so a volume restored from such a snapshot is not exported via gluster NFS. Workaround: Execute the following command on the restored volumes:
# gluster volume set <volname> nfs.disable off
Issues related to Tiering
- BZ#1334262
- If the gluster volume tier attach command times out, it can result in either of two situations: either the volume does not become a tiered volume, or the tier daemon is not started. Workaround: When the timeout is observed, follow these steps (see the sketch after these steps):
- Check whether the volume has become a tiered volume.
- If not, then rerun attach tier.
- If it has, then proceed with the next step.
- Check if the tier daemons were created on each server.
- If the tier daemons were not created, then execute the following command:
# gluster volume tier <volname> start
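A sketch of the two check steps above, assuming the volume is named testvol; the exact output labels may vary by version:
# gluster volume info testvol          (a tiered volume lists hot tier and cold tier bricks)
# gluster volume status testvol        (shows whether the tier daemon is running on each server)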
- BZ#1303298
- Listing the entries on a snapshot of a tiered volume shows incorrect permissions for some files. This is because USS returns the stat information for the linkto files on the cold tier instead of the actual data file, and these files appear to have -----T permissions. Workaround: FUSE clients can work around this issue by applying any of the following mount options: use-readdirp=no (recommended), attribute-timeout=0, or entry-timeout=0. NFS clients can work around the issue by applying the noac option.
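For example, a FUSE mount applying the recommended option (hypothetical server and volume names):
# mount -t glusterfs -o use-readdirp=no server1:/tiervol /mnt/tiervol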
- BZ#1303045
- When a tier is attached while I/O is occurring on an NFS mount, I/O pauses temporarily, usually for 3 to 5 minutes. If I/O does not resume within 5 minutes, use the following command to resume I/O without interruption:
# gluster volume start <volname> force
- Files with hard links are not promoted or demoted on tiered volumes.
- BZ#1305490
- A race condition between tier migration and hard link creation results in the hard link operation failing with a File exists error and logging Stale file handle messages on the client. This does not impact functionality, and file access works as expected. This race occurs when a file is migrated to the cold tier after a hard link has been created on the cold tier, but before a hard link is created to the data on the hot tier. In this situation, the attempt to create a hard link on the hot tier fails. However, because the migration converts the hard link on the cold tier to a data file, and a linkto already exists on the cold tier, the links exist and work as expected.
- BZ#1277112
- When hot tier storage is full, write operations such as file creation or new writes to existing files fail with a No space left on device error, instead of redirecting writes or flushing data to cold tier storage. Workaround: If the hot tier is not completely full, it is possible to work around this issue by waiting for the next CTR promote/demote cycle before continuing with write operations. If the hot tier does fill completely, administrators can copy a file from the hot tier to a safe location, delete the original file from the hot tier, and wait for demotion to free more space on the hot tier before copying the file back.
- BZ#1278391
- Migration from the hot tier fails when the hot tier is completely full because there is no space left to set the extended attribute that triggers migration.
- BZ#1283507
- Corrupted files can be identified for promotion and promoted to hot tier storage. In rare circumstances, corruption can be missed by the BitRot scrubber. This can happen in two ways:
- A file is corrupted before its checksum is created, so that the checksum matches the corrupted file, and the BitRot scrubber does not mark the file as corrupted.
- A checksum is created for a healthy file, the file becomes corrupted, and the corrupted file is not compared to its checksum before being identified for promotion and promoted to the hot tier, where a new (corrupted) checksum is created.
When tiering is in use, these unidentified corrupted files can be 'heated' and selected for promotion to the hot tier. If a corrupted file is migrated to the hot tier, and the hot tier is not replicated, the corrupted file cannot be accessed or migrated back to the cold tier.
- BZ#1306917
- When User Serviceable Snapshots are enabled, attaching a tier succeeds, but any I/O operations in progress during the attach tier operation may fail with stale file handle errors. Workaround: Disable User Serviceable Snapshots before performing attach tier. Once attach tier has succeeded, User Serviceable Snapshots can be enabled again.
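A sketch of the workaround sequence, assuming a volume named testvol and hypothetical hot tier bricks:
# gluster volume set testvol features.uss disable
# gluster volume tier testvol attach replica 2 server1:/rhgs/hotbrick1 server2:/rhgs/hotbrick2
# gluster volume set testvol features.uss enable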
Issues related to Snapshot
- BZ#1403169
- If NFS-Ganesha was enabled when a snapshot was taken, and NFS-Ganesha is disabled or the shared storage is down when that snapshot is restored, the snapshot restore fails.
- BZ#1403195
- Snapshot creation might fail if a brick has started but not all translators have initialized.
- BZ#1201820
- When a snapshot is deleted, the corresponding file system object in the User Serviceable Snapshot is also deleted. Any subsequent file system access results in the snapshot daemon becoming unresponsive. To avoid this issue, ensure that you do not perform any file system operations on a snapshot that is about to be deleted.
- BZ#1169790
- When a volume is down and there is an attempt to access the .snaps directory, a negative cache entry is created in the kernel Virtual File System (VFS) cache for the .snaps directory. After the volume is brought back online, accessing the .snaps directory fails with an ENOENT error because of the negative cache entry. Workaround: Clear the kernel VFS cache by executing the following command:
# echo 3 > /proc/sys/vm/drop_caches
Note that this can cause temporary performance degradation.
- BZ#1174618
- If the User Serviceable Snapshot feature is enabled and a directory has a pre-existing .snaps folder, accessing that folder can lead to unexpected behavior. Workaround: Rename the pre-existing .snaps folder to another name.
- BZ#1394229
- Performing operations which involve client graph changes such as volume set operations, restoring snapshot, etc. eventually leads to out of memory scenarios for the client processes that mount the volume.
- BZ#1133861
- New snapshot bricks fail to start if the total snapshot brick count on a node goes beyond 1K. Until this bug is corrected, Red Hat recommends deactivating unused snapshots to avoid hitting the 1K limit.
- BZ#1129675
- Performing a snapshot restore while glusterd is not available on a cluster node, or while a node is unavailable, results in the following errors:
- Executing the gluster volume heal vol-name info command displays the error message Transport endpoint not connected.
- An error occurs when clients try to connect to the glusterd service.
Workaround: Perform snapshot restore only if all the nodes and their corresponding glusterd services are running. Start glusterd by running the following command:
# service glusterd start
- BZ#1118780
- On restoring a snapshot which was created while the rename of a directory was in progress (the directory has been renamed on the hashed sub-volume but not on all of the sub-volumes), both the old and new directories will exist and have the same GFID. This can cause inconsistencies and issues accessing files in those directories. In DHT, a rename (source, destination) of a directory is done first on the hashed sub-volume and, if successful, on the remaining sub-volumes. At this point in time, both source and destination directories are present in the volume with the same GFID - the destination on the hashed sub-volume and the source on the rest of the sub-volumes. A parallel lookup (on either source or destination) at this time can result in the creation of these directories on the sub-volumes on which they do not yet exist - the source directory entry on the hashed sub-volume and the destination directory entry on the remaining sub-volumes. Hence, there would be two directory entries - source and destination - having the same GFID.
- BZ#1236149
- If a node or brick is down, the snapshot create command fails even with the force option.
- BZ#1240227
- LUKS encryption over LVM is currently not supported.
- BZ#1246183
- User Serviceable Snapshots is not supported on Erasure Coded (EC) volumes.
Issues related to Nagios
- BZ#1327017
- Log messages related to quorum being regained are missed by the Nagios server when it is either shut down or has communication issues with the nodes. Due to this, if the Cluster Quorum status was critical prior to the connection issues, it continues to remain so. Workaround: The administrator can check the alert in the Nagios UI and, once quorum is regained, manually change the plug-in result using the "Submit passive check result for this service" option on the service page.
- BZ#1136207
- The Volume Status service shows the "All bricks are Up" message even when some of the bricks are in an unknown state due to unavailability of the glusterd service.
- BZ#1109683
- When a volume has a large number of files to heal, the volume self-heal info command takes time to return results, and the nrpe plug-in times out because the default timeout is 10 seconds. Workaround: In /etc/nagios/gluster/gluster-commands.cfg, increase the timeout of the nrpe plug-in to 10 minutes by using the -t option in the command. For example:
$USER1$/gluster/check_vol_server.py $ARG1$ $ARG2$ -o self-heal -t 600
- BZ#1094765
- When certain commands invoked by Nagios plug-ins fail, irrelevant outputs are displayed as part of performance data.
- BZ#1107605
- Executing the sadf command used by the Nagios plug-ins returns invalid output. Workaround: Delete the data file located at /var/log/sa/saDD, where DD is the current date. This deletes the data file for the current day; a new data file that is usable by the Nagios plug-in is created automatically.
- BZ#1107577
- The Volume Self-Heal service returns a WARNING when unsynchronized entries are present in the volume, even though these files may be synchronized during the next run of the self-heal process if self-heal is turned on for the volume.
- BZ#1121009
- In Nagios, CTDB service is created by default for all the gluster nodes regardless of whether CTDB is enabled on the Red Hat Gluster Storage node or not.
- BZ#1089636
- In the Nagios GUI, incorrect status information is displayed as "Cluster Status OK: None of the Volumes are in Critical State" when volumes are utilized beyond the critical level.
- BZ#1111828
- In Nagios GUI, Volume Utilization graph displays an error when volume is restored using its snapshot.
Issues related to Rebalancing Volumes
- BZ#1286074
- While rebalance is in progress, adding a brick to the cluster displays an error message, failed to get index, in the gluster log file. This message can be safely ignored.
Issues related to Geo-replication
- BZ#1393362
- If a geo-replication session is created while a gluster volume rebalance is in progress, geo-replication may miss syncing some files or directories to the slave volume. This is caused by the internal movement of files due to rebalance. Workaround: Do not create a geo-replication session while a rebalance of the master volume is in progress.
- BZ#1344861
- If the geo-replication configuration is changed while one or more nodes are down in the master cluster, the nodes that were down will have the old configuration when they come back up. Workaround: Execute the geo-replication config command again once all nodes are up. With this, all nodes in the master cluster will have the same geo-replication config options.
- BZ#1293634
- Sync performance for geo-replicated storage is reduced when the master volume is tiered, resulting in slower geo-replication performance on tiered volumes.
- BZ#1302320
- During file promotion, the rebalance operation sets the sticky bit and suid/sgid bit. Normally, it removes these bits when the migration is complete. If readdirp is called on a file before migration completes, these bits are not removed and remain applied on the client. This means that, if rsync happens while the bits are applied, the bits remain applied to the file as it is synced to the destination, impairing accessibility on the destination. This can happen in any geo-replicated configuration, but the likelihood increases with tiering because the rebalance process is continuous.
- BZ#1102524
- The geo-replication worker goes to a Faulty state and restarts when resumed. It works as expected after the restart, but takes more time to synchronize than a resume would.
- BZ#1238699
- The Changelog History API expects the brick path to remain the same for a session. However, on snapshot restore, the brick path is changed. This causes the History API to fail and geo-replication to change to Faulty. Workaround:
- After the snapshot restore, ensure the master and slave volumes are stopped.
- Back up the htime directory (of the master volume):
cp -a <brick_htime_path> <backup_path>
Note: Using the -a option is important to preserve extended attributes. For example:
cp -a /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime /opt/backup_htime/brick0_b0
- Run the following command to replace the OLD path in the htime file(s) with the new brick path, where OLD_BRICK_PATH is the brick path of the current volume and NEW_BRICK_PATH is the brick path after snapshot restore:
find <new_brick_htime_path> -name 'HTIME.*' -print0 | \
xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'
For example:
find /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0 | \
xargs -0 sed -ci 's|/bricks/brick0/b0/|/var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/|g'
- Start the master and slave volumes and the geo-replication session on the restored volume. The status should update to Active.
Issues related to Self-heal
- BZ#1230092
- When you create a replica 3 volume, client quorum is enabled and set to auto by default. However, it does not get displayed in gluster volume info.
- BZ#1240658
- When files are accidentally deleted from a brick in a replica pair on the back end, and gluster volume heal VOLNAME full is run, there is a chance that the files may not get healed. Workaround: Perform a lookup on the files from the client (mount). This triggers the heal.
- BZ#1173519
- If you write to an existing file and go over the _AVAILABLE_BRICK_SPACE_, the write fails with an I/O error. Workaround: Use the cluster.min-free-disk option. If you routinely write files up to n GB in size, you can set min-free-disk to a value m GB greater than n. For example, if your file size is 5 GB, which is at the high end of the file sizes you will be writing, you might consider setting min-free-disk to 8 GB. This ensures that the file will be written to a brick with enough available space (assuming one exists):
# gluster v set _VOL_NAME_ min-free-disk 8GB
Issues related to replace-brick operation
- After the gluster volume replace-brick VOLNAME Brick New-Brick commit force command is executed, the file system operations on that particular volume that are in transit fail.
- After a replace-brick operation, the stat information is different on the NFS mount and the FUSE mount. This happens due to internal time stamp changes when the replace-brick operation is performed.
Issues related to NFS
- After you restart the NFS server, the unlock within the grace-period feature may fail and the locks held previously may not be reclaimed.
- fcntl locking (NFS Lock Manager) does not work over IPv6.
- You cannot perform an NFS mount on a machine on which the glusterfs-NFS process is already running unless you use the NFS mount -o nolock option. This is because glusterfs-nfs has already registered the NLM port with the portmapper.
- If the NFS client is behind a NAT (Network Address Translation) router or a firewall, the locking behavior is unpredictable. The current implementation of NLM assumes that Network Address Translation of the client's IP does not happen.
- The nfs.mount-udp option is disabled by default. You must enable it to use POSIX locks on Solaris when using NFS to mount a Red Hat Gluster Storage volume.
- If you enable the nfs.mount-udp option, then while mounting a subdirectory (exported using the nfs.export-dir option) on Linux, you must mount using the -o proto=tcp option (see the example after this list). UDP is not supported for subdirectory mounts on the GlusterFS-NFS server.
- For NFS Lock Manager to function properly, you must ensure that all of the servers and clients have resolvable hostnames. That is, servers must be able to resolve client names and clients must be able to resolve server hostnames.
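For example, mounting an exported subdirectory over TCP (hypothetical server, volume, and subdirectory names):
# mount -t nfs -o vers=3,proto=tcp server1:/testvol/subdir /mnt/subdir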
Issues related to NFS-Ganesha
- BZ#1402308
- The Corosync service will crash if ifdown is performed after setting up the ganesha cluster. This may impact HA functionality.
- BZ#1330218
- If a volume is being accessed by heterogeneous clients (that is, both NFSv3 and NFSv4 clients), NFSv4 clients take a longer time to recover after a virtual IP failover caused by a node shutdown. Workaround: Use different VIPs for different access protocols (NFSv3 or NFSv4).
- BZ#1416371
- If a gluster volume stop operation on a volume exported via the NFS-Ganesha server fails, there is a probability that the volume will get unexported on a few nodes in spite of the command failure. This will lead to an inconsistent state across the NFS-Ganesha cluster. Workaround: To restore the cluster back to a normal state, perform the following steps:
- Identify the nodes where the volume got unexported.
- Re-export the volume manually using the following dbus command:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<volname>.conf string:"EXPORT(Path=/<volname>)"
- BZ#1381416
- When a READDIR is issued on a directory that is mutating, the cookie sent as part of the request could refer to a file that has already been deleted. In such cases, the server returns a BAD_COOKIE error. Due to this, some applications (like the bonnie test suite) that do not handle such errors may error out. This is expected behaviour of the NFS server, and the applications have to be fixed to handle such errors.
- BZ#1398280
- If any of the PCS resources are in the failed state, then the teardown requires a lot of time to complete. Due to this, the gluster nfs-ganesha disable command will time out. Workaround: If gluster nfs-ganesha disable encounters a timeout, run pcs status and check whether any resource is in a failed state. Then perform the cleanup for that resource using the following command:
# pcs resource cleanup <resource id>
Re-execute the gluster nfs-ganesha disable command.
- BZ#1328581
- After removing a file, the nfs-ganesha process does a lookup on the removed entry to update the attributes in case any links are present. Because the file has been deleted, the lookup fails with ENOENT, resulting in a misleading log message in gfapi.log. This is expected behaviour and there is no functionality issue here. The log message can be ignored in such cases.
- BZ#1259402
- When vdsmd and abrt are installed alongside each other, vdsmd overwrites the abrt core dump configuration in /proc/sys/kernel/core_pattern. This prevents NFS-Ganesha from generating core dumps. Workaround: Disable core dumps in /etc/vdsm/vdsm.conf by setting core_dump_enable to false, and then restart the abrt-ccpp service:
# systemctl restart abrt-ccpp
- BZ#1257548
- The nfs-ganesha service monitor script that triggers IP failover runs periodically every 10 seconds. The ping-timeout of the glusterFS server (after which the locks of the unreachable client get flushed) is 42 seconds by default. After an IP failover, some locks may not get cleaned by the glusterFS server process, and hence reclaiming the lock state by NFS clients may fail. Workaround: It is recommended to set the nfs-ganesha service monitor period interval (default 10 seconds) to at least twice the Gluster server ping-timeout (default 42 seconds). Hence, either decrease the network ping-timeout using the following command:
# gluster volume set <volname> network.ping-timeout <ping_timeout_value>
or increase the nfs-mon monitor interval using the following commands:
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value>
- BZ#1226874
- If NFS-Ganesha is started before you set up an HA cluster, there is no way to validate the cluster state and stop NFS-Ganesha if the setup fails. Even if the HA cluster setup fails, the NFS-Ganesha service continues running. Workaround: If the HA setup fails, run service nfs-ganesha stop on all nodes in the HA cluster.
- BZ#1470025
- PCS cluster IP resources may enter a FAILED state during failover or failback of a VIP in the NFS-Ganesha HA cluster. As a result, the VIP is inaccessible, resulting in mount failures or a system freeze. Workaround: Clean up the resource that failed, using the following command:
# pcs resource cleanup resource-id
- BZ#1461507
- When the duplicate request cache (DRC) entries maintained by the NFS-Ganesha server reach the high watermark limit, the server tries to reclaim old entries, which may still be in use. As a result, every time the server cannot reclaim an entry, it logs a warning. This may flood the log file at times if there are too many requests being processed. Workaround: Increase the DRC limit by executing the following steps:
- Edit the /run/gluster/shared_storage/nfs-ganesha/ganesha.conf file and add the following parameters in the NFS_Core_Param block:
NFS_Core_Param {
    DRC_TCP_Hiwat = 1024;   # default is 256
}
- Restart the NFS-Ganesha process on all the nodes in the NFS-Ganesha cluster using the following command:
# systemctl restart nfs-ganesha
- BZ#1474716
- After a reboot, systemd may interpret NFS-Ganesha to be in the STARTED state when it is not running. Workaround: Manually start the NFS-Ganesha process.
- BZ#1473280
- The gluster nfs-ganesha disable command stops the NFS-Ganesha service when executed. In the case of pre-exported entries, NFS-Ganesha may enter the FAILED state. Workaround: Restart the NFS-Ganesha process after the failure and rerun the following command:
# gluster nfs-ganesha disable
Issues related to Object Store
- The GET and PUT commands fail on large files while using Unified File and Object Storage. Workaround: Set the node_timeout=60 variable in the proxy, container, and object server configuration files.
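For example, assuming the object store configuration files live under /etc/swift (the path may differ on your deployment), add the following line to proxy-server.conf, container-server.conf, and object-server.conf:
node_timeout = 60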
Issues related to Red Hat Gluster Storage Volumes
- BZ#1286050
- On a volume, when read and write operations are in progress and a rebalance operation followed by a remove-brick operation is performed simultaneously on that volume, the rm -rf command fails on a few files.
- BZ#1224153
- When a brick process dies, BitD tries to read from the socket used to communicate with the corresponding brick. If it fails, BitD logs the failure to the log file. This results in many failure messages being logged for reads from the socket, and an increase in the size of the log file.
- BZ#1224162
- Due to an unhandled race in the RPC interaction layer, brick-down notifications may result in corrupted data structures being accessed. This can lead to a NULL pointer access and segfault. Workaround: When the BitRot daemon (bitd) crashes (segfault), you can use gluster volume start VOLNAME force to restart bitd on the node(s) where it crashed.
- BZ#1227672
- A successful scrub of the filesystem (objects) is required to see if a given object is clean or corrupted. When a file gets corrupted and a scrub has not been run on the filesystem, there is a good chance of replicating corrupted objects in cases when the brick holding the good copy was offline when I/O was performed. Workaround: Objects need to be checked on demand for corruption during healing.
- BZ#1241336
- When a Red Hat Gluster Storage node is shut down due to power failure or hardware failure, or when the network interface on a node goes down abruptly, subsequent gluster commands may time out. This happens because the corresponding TCP connection remains in the ESTABLISHED state. You can confirm this by executing the following command:
# ss -tap state established '( dport = :24007 )' dst <IP-addr-of-powered-off-RHGS-node>
Workaround: Restart the glusterd service on all other nodes.
- BZ#1223306
- gluster volume heal VOLNAME info shows stale entries even after the file is deleted. This happens due to a rare case where the gfid handle of the file is not deleted. Workaround: On the bricks where the stale entries are present, for example <gfid:5848899c-b6da-41d0-95f4-64ac85c87d3f>, check whether the file's gfid handle has not been deleted by running the following command and checking whether the file appears in the output, for example <brick-path>/.glusterfs/58/48/5848899c-b6da-41d0-95f4-64ac85c87d3f:
# find <brick-path>/.glusterfs -type f -links 1
If the file appears in the output of this command, delete it using the following command:
# rm <brick-path>/.glusterfs/58/48/5848899c-b6da-41d0-95f4-64ac85c87d3f
Issues related to Samba
- BZ#1419633
- CTDB fails to start on setups where real-time schedulers have been disabled, for example where vdsm is installed. Workaround: Enable real-time schedulers by executing the following command, and then restart the ctdb service:
# echo 950000 > /sys/fs/cgroup/cpu,cpuacct/system.slice/cpu.rt_runtime_us
For more information, refer to the cgroup section of the Red Hat Enterprise Linux System Administrator's Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/System_Administrators_Guide/index.html
- BZ#1379444
- Sharing subdirectories of a Gluster volume does not work if the shadow_copy2 vfs module is also used. This is because shadow_copy2 checks the local filesystem for the path being shared, and Gluster volumes are remote filesystems accessed using libgfapi. Workaround: Add shadow:mountpoint = / in the share section of smb.conf to bypass this check.
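A sketch of such a share section in smb.conf, with hypothetical share, volume, and path names:
[gluster-subdir]
path = /subdir
vfs objects = shadow_copy2 glusterfs
glusterfs:volume = testvol
shadow:mountpoint = /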
- BZ#1329718
- Snapshot volumes are read-only. All snapshots are made available as directories inside the .snaps directory. Even though snapshots are read-only, the directory attribute of a snapshot is the same as the directory attribute of the root of the snapshot volume, which can be read-write. This can lead to confusion, because Windows will assume that the snapshots directory is read-write. The 'Restore previous version' option in file properties provides an 'Open' option, which opens the file from the corresponding snapshot. If opening the file also creates temp files (for example, Microsoft Word files), the open fails, because temp file creation fails on the read-only snapshot volume. Workaround: Copy such files to a different location instead of opening them directly.
- BZ#1322672
- When vdsm and abrt's ccpp addon are installed alongside each other, vdsmd overwrites abrt's core dump configuration in /proc/sys/kernel/core_pattern. This prevents Samba from generating core dumps due to an SELinux search denial on the new core dump location set by vdsmd. Workaround: To work around this issue, execute the following steps:
- Disable core dumps in /etc/vdsm/vdsm.conf:
core_dump_enable = false
- Restart the abrt-ccpp and smb services:
# systemctl restart abrt-ccpp
# systemctl restart smb
- BZ#1300572
- Due to a bug in the Linux CIFS client, SMB 2.0+ connections from Linux to Red Hat Gluster Storage currently will not work properly. SMB1 connections from Linux to Red Hat Gluster Storage, and all connections with supported protocols from Windows, continue to work. Workaround: If practical, restrict Linux CIFS mounts to SMB version 1. The simplest way to do this is to not specify the vers mount option, since the default setting is to use only SMB version 1. If restricting Linux CIFS mounts to SMB1 is not practical, disable asynchronous I/O in Samba by setting aio read size to 0 in the smb.conf file. Disabling asynchronous I/O may have a performance impact on other clients.
- BZ#1282452
- Attempting to upgrade to ctdb version 4 fails when ctdb2.5-debuginfo is installed, because the ctdb2.5-debuginfo package currently conflicts with the samba-debuginfo package. Workaround: Manually remove the ctdb2.5-debuginfo package before upgrading to ctdb version 4. If necessary, install samba-debuginfo after the upgrade.
- BZ#1164778
- Any changes performed by an administrator in a Gluster volume's share section of smb.conf are replaced with the default Gluster hook script settings when the volume is restarted. Workaround: The administrator must perform the changes again on all nodes after the volume restarts.
Issues related to SELinux
- BZ#1256635
- Red Hat Gluster Storage does not currently support SELinux Labeled mounts. On a FUSE mount, SELinux cannot currently distinguish file systems by subtype, and therefore cannot distinguish between different FUSE file systems (BZ#1291606). This means that a client-specific policy for Red Hat Gluster Storage cannot be defined, and SELinux cannot safely translate client-side extended attributes for files tracked by Red Hat Gluster Storage. A workaround is in progress for NFS-Ganesha mounts as part of BZ#1269584. When complete, BZ#1269584 will enable Red Hat Gluster Storage support for NFS version 4.2, including SELinux Labeled support.
- BZ#1291194, BZ#1292783
- Current SELinux policy prevents ctdb's 49.winbind event script from executing smbcontrol. This can create inconsistent state in winbind, because when a public IP address is moved away from a node, winbind fails to drop connections made through that IP address.
Issues related to Sharding
- BZ#1332861
- Sharding relies on the block count difference before and after every write, as reported by the underlying file system, and adds that to the existing block count of a sharded file. But XFS's speculative preallocation of blocks causes this accounting to go wrong: with speculative preallocation, the block count of the shards after the write, as reported by XFS, can be greater than the number of blocks actually written. Due to this, the block count of a sharded file might sometimes be projected to be higher than the actual number of blocks consumed on disk. As a result, commands like du -sh might report a higher size than the actual number of physical blocks used by the file.
General issues
- GFID mismatches cause errors
- If files and directories have different GFIDs on different back-ends, the glusterFS client may hang or display errors. Contact Red Hat Support for more information on this issue.
- BZ#1236025
- The time stamp of files and directories changes on snapshot restore, resulting in a failure to read the appropriate change logs. glusterfind pre fails with the following error: historical changelogs not available. Existing glusterfind sessions fail to work after a snapshot restore. Workaround: Gather the necessary information from existing glusterfind sessions, remove the sessions, perform the snapshot restore, and then create new glusterfind sessions.
- BZ#1260119
- The glusterfind command must be executed from one node of the cluster. If all the nodes of the cluster are not added to the known_hosts list of the node initiating the command, then the glusterfind create command hangs. Workaround: Add all the peer hosts, including the local node, to known_hosts.
- BZ#1058032
- While migrating VMs, libvirt changes the ownership of the guest image unless it detects that the image is on a shared filesystem, and the VMs then cannot access the disk images because the required ownership is not available. Workaround: Before migration, power off the VMs. When migration is complete, restore the ownership of the VM disk image (107:107) and start the VMs.
- BZ#1127178
- If a replica brick goes down and comes back up while the rm -rf command is executed, the operation may fail with the message "Directory not empty". Workaround: Retry the operation when there are no pending self-heals.
- BZ#1449638
- The flexible I/O tester tool sends write calls of 1 byte. For a sequential write, if a write call on a dispersed volume is not aligned to the stripe size, it first reads the whole stripe, then calculates the erasure code, and then writes it back to the bricks. As a result, these read calls add their own latency, causing slow write performance. Workaround: There is currently no known workaround for this issue.
- BZ#1460629
- When the rm -rf command is executed on a parent directory that has a pending self-heal entry involving purging files from a sink brick, the directory and files awaiting heal may not be removed from the sink brick. Since the readdir for the rm -rf is served from the source brick, the file pending entry heal is not deleted from the sink brick. Any data or metadata pending heal on such a file is displayed in the output of the heal-info command until the issue is fixed. Workaround: If the files and parent directory are not present on other bricks, delete them from the sink brick. This ensures that they are no longer listed in the next heal-info output.
- BZ#1462079
- Due to incomplete error reporting, statedump is not generated after executing the following command:
# gluster volume statedump volume client host:port
Workaround: Verify that the host:port is correct in the command. The resulting statedump file(s) are placed in /var/run/gluster on the host running the gfapi application.
Issues related to Red Hat Gluster Storage AMI
- BZ#1267209
- The redhat-storage-server package is not installed by default in a Red Hat Gluster Storage Server 3 on Red Hat Enterprise Linux 7 AMI image. Workaround: It is highly recommended to manually install this package using yum:
# yum install redhat-storage-server
The redhat-storage-server package primarily provides the /etc/redhat-storage-release file, and sets the environment for the storage node.