Chapter 2. What Changed in this Release?
2.1. What's New in this Release?
This section describes the key features and enhancements in the Red Hat Gluster Storage 3.2 release.
- Improved Performance with Compound File Operations
- Administrators can now enable compound file operations on volumes with the cluster.use-compound-fops volume option. When this option is enabled, write transactions are compounded, resulting in less network activity and improved performance. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Configuring_Volume_Options.
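As a minimal illustration, the option can be enabled with the standard volume set command, assuming it accepts the usual on/off values; VOLNAME is a placeholder for your volume name:
# gluster volume set VOLNAME cluster.use-compound-fops on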
- md-cache Performance Enhancement
- To improve the performance of directory operations on Red Hat Gluster Storage volumes, the maximum metadata (stat, xattr) caching time on the client side can now be increased to 10 minutes without compromising the consistency of the cache. Enabling metadata caching yields significant performance improvements in the following workloads:
- Listing of directories (recursive)
- Creating files
- Deleting files
- Renaming files
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#sect-Directory_Operations.
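As an illustrative sketch, metadata caching is typically enabled with settings such as the following; the option names and the 600-second (10 minute) values are assumptions based on the md-cache and cache-invalidation volume options, so verify them against the Administration Guide. VOLNAME is a placeholder:
# gluster volume set VOLNAME features.cache-invalidation on
# gluster volume set VOLNAME features.cache-invalidation-timeout 600
# gluster volume set VOLNAME performance.cache-invalidation on
# gluster volume set VOLNAME performance.md-cache-timeout 600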
- Parallel I/O for Dispersed Volumes
- The new performance.client-io-threads volume option enables up to 16 threads to be used in parallel on dispersed (erasure-coded) volumes. Threads are created automatically based on client workload when this option is enabled. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Red_Hat_Storage_Volumes-Creating_Dispersed_Volumes_1.
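As a minimal sketch, the option can be turned on for an existing dispersed volume with the volume set command (VOLNAME is a placeholder):
# gluster volume set VOLNAME performance.client-io-threads on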
- Enhancements to Bitrot
- The new ondemand option is introduced in this release. With this option, you can start the scrubbing process on demand. When you run the gluster volume bitrot <VOLNAME> scrub ondemand command, the scrubber starts crawling the file system immediately. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Detecting_Data_Corruption.
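For example, an on-demand scrub run might look like the following, assuming bitrot detection is first enabled on the volume (VOLNAME is a placeholder):
# gluster volume bitrot VOLNAME enable
# gluster volume bitrot VOLNAME scrub ondemand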
- Obtaining Node Information
- The get-state command is introduced to obtain node information. The command writes information about the specified node to a specified file. Using the command line interface, external applications can invoke the command on every node of the trusted storage pool, then parse and collate the data from all nodes to obtain a complete, easy-to-use, machine-parseable picture of the state of the trusted storage pool. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#obtaining_node_information.
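As an illustrative invocation, the local node state can be dumped to the default location, or to a named file using the optional odir and file arguments; those arguments and the output path shown are assumptions, so check the Administration Guide for the exact syntax:
# gluster get-state
# gluster get-state odir /var/tmp/ file gluster_state_node1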
- Arbitrated Replicated Volumes
- An arbitrated replicated volume, or arbiter volume, is a three-way replicated volume where every third brick is a special type of brick called an arbiter. Arbiter bricks do not store file data; they only store file names, structure, and metadata. The arbiter uses client quorum to compare this metadata with that of the other nodes to ensure consistency in the volume and prevent split-brain conditions. This maintains the consistency of three-way replication but requires far less storage. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Creating_Arbitrated_Replicated_Volumes.
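As a sketch, an arbiter volume is created by marking the third brick of each replica set as the arbiter; the host names and brick paths below are placeholders:
# gluster volume create VOLNAME replica 3 arbiter 1 server1:/bricks/brick1 server2:/bricks/brick2 server3:/bricks/arbiter-brick1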
- Multithreaded Self-heal for Erasure Coded Volume
- With Red Hat Gluster Storage 3.2, multiple threads on every brick process can scan indices in parallel and trigger heals for them at the same time. This is supported on disperse and distribute-disperse volumes. Note that increasing the number of parallel heals impacts I/O performance. The disperse.shd-max-threads volume option configures the number of entries that can be self-healed in parallel on each disperse subvolume. The disperse.shd-wait-qlength volume option configures the maximum number of entries kept in the queue for self-heal daemon threads to take up as soon as any thread is free to heal; this value should be tuned according to how much memory the self-heal daemon process can use for holding the next set of entries to be healed. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Configuring_Volume_Options.
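For illustration, both options can be tuned with the volume set command; the values shown are arbitrary examples rather than recommendations, and VOLNAME is a placeholder:
# gluster volume set VOLNAME disperse.shd-max-threads 4
# gluster volume set VOLNAME disperse.shd-wait-qlength 1024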
- gdeploy Enhancements
- The gdeploy tool automates system administration tasks such as creating bricks and setting up and mounting volumes. When setting up a fresh cluster, gdeploy can be the preferred method, as manually executing numerous commands is error prone. With the Red Hat Gluster Storage 3.2 release, gdeploy now supports setting up NFS-Ganesha, Samba, and SSL on volumes. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Red_Hat_Storage_Volumes-gdeploy.
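As a purely illustrative sketch, gdeploy is driven by a configuration file and invoked with the -c option. The section and key names below are assumptions based on typical gdeploy volume-creation examples; consult the gdeploy documentation for the exact format:
# cat gluster.conf
[hosts]
server1.example.com
server2.example.com
server3.example.com

[volume]
action=create
volname=VOLNAME
replica=yes
replica_count=3
force=yes

# gdeploy -c gluster.conf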
- glusterd Enhancements
- The Red Hat Gluster Storage 3.2 release includes several bug fixes and enhancements for glusterd, which now make it possible to configure a larger number of volumes in a trusted storage pool.
- Granular Entry Self-heal
- When the granular entry self-heal option is enabled, Red Hat Gluster Storage stores more granular information about the entries that were created in or deleted from a directory while a brick in a replica was down. This speeds up self-heal of directories, especially in use cases where directories with a large number of entries are modified by creating or deleting entries. If this option is disabled, only the fact that the directory needs healing is recorded, without information about which entries within the directory need to be healed, so a full directory crawl is required to identify the changes. For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Configuring_Volume_Options.
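This section does not name the volume option itself; assuming it is the cluster.granular-entry-heal option listed in the volume options reference, it could be enabled as follows (VOLNAME is a placeholder):
# gluster volume set VOLNAME cluster.granular-entry-heal on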
- NFS-Ganesha Enhancements
- The NFS-Ganesha package is rebased to upstream version 2.4.1, which provides several important bug fixes and enhancements, including the following:
- cache_inode replaced with stackable FSAL_MDCACHE.
- support_ex FSAL API extensions to allow associating file descriptors or other FSAL specific information with state_t objects.
- abort() on ENOMEM rather than attempting to continue.
- Proper handling of NFS v3 (NLM) blocked locks.
- netgroup cache.
- Cache open owners.
- Various bug fixes, including resolution of memory leaks and refcount issues.
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#sect-NFS_Ganesha.
- Geo-replication Enhancements
- Resetting synchronization time while deleting the geo-replication session: The geo-replication delete command retains information about the last synchronized time. Because of this, if the same geo-replication session is recreated, synchronization continues from the point it had reached before the session was deleted. To discard this information when deleting a session, use the reset-sync-time option with the delete command; when the session is recreated, synchronization then starts from the beginning, just like a new session.
Simplified Secure Geo-replication Setup: The internal mountbroker feature is now enhanced to set the necessary SELinux rules and permissions, create the required directory, and update the glusterd.vol files while setting up the secure geo-replication slave. This simplifies the existing procedure for setting up a secure geo-replication slave.
Geo-replication Changelog Log Level: You can now set the log level for the geo-replication changelog. The default log level is INFO.
For more information, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Managing_Geo-replication.
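As a minimal illustration of the first enhancement, a session can be deleted without preserving the last synchronized time as follows; MASTER_VOL, SLAVE_HOST, and SLAVE_VOL are placeholders:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL delete reset-sync-time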