
Chapter 2. What Changed in this Release?


2.1. What's New in this Release?

This section describes the key features and enhancements in the Red Hat Gluster Storage 3.5 release.
  • Red Hat Gluster Storage has been updated to upstream glusterfs version 6.
  • Samba has been updated to upstream version 4.9.8.
  • NFS-Ganesha has been updated to upstream version 2.7.3.
  • NFS version 4.1 is now supported.
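    For example, an NFS-Ganesha export can be mounted with NFS version 4.1 using the standard mount syntax; the host name, export path, and mount point below are placeholders:
    # mount -t nfs -o vers=4.1 nfs-server.example.com:/VOLNAME /mnt/nfs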
  • Directory contents are now read in configurable chunks, so very large directory listings can begin to be served to NFS-Ganesha clients without waiting for the entire directory to be read.
  • Red Hat Gluster Storage now provides the option of storing its own ctime attribute as an extended attribute across replicated sub-volumes. This avoids the consistency issues between replicated and distributed bricks that occurred with file system based ctime, for example after self-healing.
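    For example, consistent time attributes can be enabled per volume; a minimal sketch, assuming a volume named VOLNAME and the features.ctime volume option:
    # gluster volume set VOLNAME features.ctime on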
  • Bricks in different subvolumes can now be different sizes, and Gluster's algorithms account for this when determining placement ranges for files. Available space algorithms have been updated to work better with heterogeneous brick sizes. Bricks belonging to the same replica set or disperse set must still be the same size.
  • The default maximum port number for bricks is now 60999 instead of 65535.
  • Administrators can now prevent nodes with revoked certificates from accessing the cluster by adding a banned node's certificate to a Certificate Revocation List file, and specifying the file's path in the new ssl.crl-path volume option.
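    For example, the option can be set with the gluster CLI; the volume name and CRL file path below are placeholders:
    # gluster volume set VOLNAME ssl.crl-path /etc/ssl/glusterfs.crl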
  • Gluster-ansible can now configure IPv6 networking for Red Hat Gluster Storage.
  • The storage.fips-mode-rchecksum volume option is now enabled by default for new volumes on clusters with an op-version of 70000 or higher.
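    For example, the cluster op-version can be checked and the option enabled explicitly on an existing volume; a minimal sketch, assuming a volume named VOLNAME:
    # gluster volume get all cluster.op-version
    # gluster volume set VOLNAME storage.fips-mode-rchecksum on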
  • Configuring geo-replication was previously a lengthy and error-prone process. Support for configuring geo-replication is now provided by gdeploy, automating the configuration and reducing the opportunity for error.
  • A new API, glfs_set_statedump_path, lets users configure the directory to store statedump output. For example, the following call sets the /tmp directory as the statedump path:
    glfs_set_statedump_path(fs2, "/tmp");
  • New configuration options have been added to enable overriding of umask. The storage.create-directory-mask and storage.create-mask options restrict the file mode to the given mask for directories and other files respectively. The storage.force-directory-mode and storage.force-create-mode options enforce the presence of the given mode bits for directories and other files respectively. Note that these mode constraints are maintained for as long as the option values are in effect; they do not apply only at file creation.
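    For example, these options can be set per volume with the gluster CLI; the mask and mode values below are illustrative, assuming a volume named VOLNAME:
    # gluster volume set VOLNAME storage.create-directory-mask 0775
    # gluster volume set VOLNAME storage.create-mask 0664
    # gluster volume set VOLNAME storage.force-directory-mode 0755
    # gluster volume set VOLNAME storage.force-create-mode 0644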
  • NFS-Ganesha now receives the upcall notifications needed to maintain active-active high availability configurations via asynchronous callbacks, which avoids the need for continuous polling and reduces CPU and memory usage.
  • A new dbus command is available for obtaining access control lists and other export information from the NFS-Ganesha server.
  • The storage.reserve option can now reserve available space as an absolute size in addition to reserving a percentage.
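    For example, an absolute reservation might be set as follows; the size value and its format are illustrative, assuming a volume named VOLNAME:
    # gluster volume set VOLNAME storage.reserve 50GB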
  • Asynchronous I/O operations were impeded by a bottleneck at the point where successful completion is notified. This bottleneck has been removed, and asynchronous I/O operations now perform better.
  • Performance improvements compared to Red Hat Gluster Storage 3.4

    NFS-Ganesha v4.1 mounts

    • 6-10 times improvement of ls -l/stat/chmod on small files on replica 3 and arbiter volumes.
    • Improvements to metadata-heavy operations that benefit small file performance for dispersed volumes:
      • chmod - 106%
      • creates - 27%
      • stat - 808%
      • reads - 130%
      • appends - 52%
      • rename - 36%
      • delete-renamed - 70%
      • mkdir - 52%
      • rmdir - 58%
    • Large file sequential write performance improvement of 47% for dispersed volumes
    • Large file sequential read/write improvements (20% and 60% respectively) for replica 3 volumes

    Gluster-FUSE mounts

    • Large file sequential read improvements (20%) for arbiter volumes
    • Small file mkdir/rmdir/rename improvements (24%, 25%, and 10% respectively) for dispersed volumes