Chapter 2. What Changed in this Release?


2.1. What's New in this Release?

This section describes the key features and enhancements in the Red Hat Gluster Storage 3.2 release.
Improved Performance with Compound File Operations
Administrators can now enable compound file operations on volumes with the cluster.use-compound-fops volume option. When this option is enabled, write transactions are compounded, resulting in less network activity and improved performance.
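For example, the option could be enabled with the standard gluster volume set command. This is a minimal sketch that assumes an existing volume; VOLNAME is a placeholder:
  # Replace VOLNAME with the name of an existing volume
  gluster volume set VOLNAME cluster.use-compound-fops on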
md-cache Performance Enhancement
To improve the performance of directory operations on Red Hat Gluster Storage volumes, the maximum metadata (stat, xattr) caching time on the client side can now be increased to 10 minutes without compromising the consistency of the cache. Enabling metadata caching provides significant performance improvements in the following workloads:
  • Listing of directories (recursive)
  • Creating files
  • Deleting files
  • Renaming files
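As an illustrative sketch, metadata caching is typically enabled by turning on cache invalidation and raising the metadata cache timeout to 600 seconds (10 minutes). The option names below follow the upstream md-cache and upcall settings and should be confirmed against the Administration Guide; VOLNAME is a placeholder:
  # Replace VOLNAME with the name of an existing volume
  gluster volume set VOLNAME features.cache-invalidation on
  gluster volume set VOLNAME features.cache-invalidation-timeout 600
  gluster volume set VOLNAME performance.stat-prefetch on
  gluster volume set VOLNAME performance.cache-invalidation on
  gluster volume set VOLNAME performance.md-cache-timeout 600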
Parallel I/O for Dispersed Volumes
The new performance.client-io-threads volume option enables up to 16 threads to be used in parallel on dispersed (erasure-coded) volumes. Threads are created automatically based on client workload when this option is enabled.
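For example, parallel client I/O threads could be enabled on a dispersed volume as follows; VOLNAME is a placeholder:
  # Replace VOLNAME with the name of an existing dispersed volume
  gluster volume set VOLNAME performance.client-io-threads on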
Enhancements to Bitrot
This release introduces the new ondemand option, which lets you start the scrubbing process on demand. When you run the gluster volume bitrot <VOLNAME> scrub ondemand command, the scrubber starts crawling the file system immediately.
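For example, assuming bitrot detection is already enabled on the volume, an on-demand scrub could be triggered as follows; VOLNAME is a placeholder:
  # Replace VOLNAME with the name of a volume that has bitrot detection enabled
  gluster volume bitrot VOLNAME scrub ondemand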
Obtaining Node Information
The get-state command is introduced to obtain node information. The command writes the state of the specified node to a specified file in a machine-parseable format. External applications can invoke the command on every node of the trusted storage pool, then parse and collate the data from all nodes to build an easy-to-use, complete picture of the state of the trusted storage pool.
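As a minimal sketch, the command can be run with its defaults, or pointed at a specific output directory and file name. The odir and file arguments below are assumed from the upstream command syntax and should be verified against the Administration Guide; the paths are placeholders:
  # Write the local node's state to the default location
  gluster get-state
  # Write the state to a named file in a chosen directory (placeholder paths)
  gluster get-state glusterd odir /var/tmp file node1-state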
Arbitrated Replicated Volumes
An arbitrated replicated volume, or arbiter volume, is a three-way replicated volume where every third brick is a special type of brick called an arbiter. Arbiter bricks do not store file data; they only store file names, structure, and metadata. The arbiter uses client quorum to compare this metadata with that of the other nodes to ensure consistency in the volume and prevent split-brain conditions. This maintains the consistency of three-way replication but requires far less storage.
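As an illustrative sketch, an arbiter volume is created with the replica 3 arbiter 1 syntax, where every third brick listed becomes the arbiter brick; the host names and brick paths below are placeholders:
  # server1-3 and the brick paths are placeholders; the third brick becomes the arbiter
  gluster volume create testvol replica 3 arbiter 1 \
      server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter1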
Multithreaded Self-heal for Erasure Coded Volume
With Red Hat Gluster Storage 3.2, multiple threads on every brick process can scan indices in parallel and trigger heals at the same time. This is supported on disperse and distribute-disperse volumes. Note that increasing the number of parallel heals impacts I/O performance. The disperse.shd-max-threads volume option configures the number of entries that can be self-healed in parallel on each disperse subvolume. The disperse.shd-wait-qlength volume option configures the maximum number of entries kept in the queue for self-heal daemon threads to take up as soon as any of the threads are free to heal. Tune this value based on how much memory the self-heal daemon process can use to hold the next set of entries that need to be healed.
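For example, both options could be tuned with the volume set command; the values shown are illustrative only and VOLNAME is a placeholder:
  # Replace VOLNAME with the name of a disperse or distribute-disperse volume;
  # the thread count and queue length are example values, not recommendations
  gluster volume set VOLNAME disperse.shd-max-threads 4
  gluster volume set VOLNAME disperse.shd-wait-qlength 2048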
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Configuring_Volume_Options.
gdeploy Enhancements
The gdeploy tool automates system administration tasks such as creating bricks and setting up and mounting volumes. gdeploy is often the preferred way to set up a fresh cluster, because manually executing the many commands involved is error prone. With the Red Hat Gluster Storage 3.2 release, gdeploy now also supports setting up NFS-Ganesha, Samba, and SSL on volumes.
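As a rough sketch of how gdeploy is driven, it reads an INI-style configuration file that is passed to it with the -c option. The section names and keys below are illustrative placeholders based on a typical volume-creation configuration; the exact sections for NFS-Ganesha, Samba, and SSL setup should be taken from the Administration Guide:
  # Example contents of a gdeploy configuration file; all names are placeholders
  [hosts]
  server1.example.com
  server2.example.com

  [volume]
  action=create
  volname=testvol
  replica=yes
  replica_count=2
  force=yes
The configuration file would then be applied by running gdeploy -c with the file as its argument.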
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Red_Hat_Storage_Volumes-gdeploy.
glusterd Enhancements
The Red Hat Gluster Storage 3.2 release includes several bug fixes and enhancements for glusterd, which now make it possible to configure a larger number of volumes in a trusted storage pool.
Granular Entry Self-heal
When the granular entry self-heal option is enabled, more granular information is stored about the entries that were created or deleted in a directory while a brick in the replica was down. This enables faster self-heal of directories, especially in use cases where directories with a large number of entries are modified by creating or deleting entries. If this option is disabled, only the fact that the directory needs healing is recorded, without any information about which entries within the directory need to be healed, so a full directory crawl is required to identify the changes.
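For example, the option could be enabled on a replicated volume as follows; cluster.granular-entry-heal is the assumed option name (it is not spelled out in these release notes) and VOLNAME is a placeholder:
  # Replace VOLNAME with the name of a replicated volume;
  # cluster.granular-entry-heal is the assumed name of the volume option
  gluster volume set VOLNAME cluster.granular-entry-heal on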
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Configuring_Volume_Options.
NFS-Ganesha Enhancements
The NFS-Ganesha package is rebased to upstream version 2.4.1, which provides several important bug fixes and enhancements, including the following:
  • cache_inode replaced with stackable FSAL_MDCACHE.
  • support_ex FSAL API extensions to allow associating file descriptors or other FSAL specific information with state_t objects.
  • abort() on ENOMEM rather than attempting to continue.
  • Proper handling of NFS v3 (NLM) blocked locks.
  • Netgroup cache.
  • Cache open owners.
  • Various bug fixes, including resolution of memory leaks and refcount issues.
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#sect-NFS_Ganesha.
Geo-replication Enhancements
Resetting synchronization time while deleting the geo-replication session: The geo-replication delete command retains information about the last synchronized time. As a result, if the same geo-replication session is recreated, synchronization continues from the point at which it stopped before the session was deleted. To discard this information when deleting a session, use the reset-sync-time option with the delete command. When the session is recreated, synchronization then starts from the beginning, just like a new session.
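For example, a session could be deleted together with its synchronization time as follows; MASTER_VOL, SLAVE_HOST, and SLAVE_VOL are placeholders:
  # MASTER_VOL, SLAVE_HOST, and SLAVE_VOL are placeholders for an existing session
  gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL delete reset-sync-time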
Simplified Secure Geo-replication Setup: The internal mountbroker feature is now enhanced to set the necessary SELinux rules and permissions, create the required directory, and update the glusterd.vol files when setting up a secure geo-replication slave. This simplifies the existing procedure for setting up a secure geo-replication slave.
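As a sketch of the simplified setup, the gluster-mountbroker helper can be used on the slave nodes to create the mount broker root directory and update the glusterd.vol configuration; the path, group, user, and volume names below are placeholders and the exact steps should be verified against the Administration Guide:
  # /var/mountbroker-root, geogroup, slavevol, and geoaccount are placeholders
  gluster-mountbroker setup /var/mountbroker-root geogroup
  gluster-mountbroker add slavevol geoaccount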
Geo-replication Changelog Log Level: You can now set the log level for the geo-replication changelog. The default log level is INFO.
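For example, the changelog log level for a session could be raised to DEBUG with the geo-replication config command; changelog_log_level is the assumed configuration key and the volume and host names are placeholders:
  # MASTER_VOL, SLAVE_HOST, and SLAVE_VOL are placeholders;
  # changelog_log_level is the assumed name of the configuration key
  gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config changelog_log_level DEBUG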
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Managing_Geo-replication.