7.25. cluster and gfs2-utils


Updated cluster and gfs2-utils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6.
The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Using redundant hardware, shared disk storage, power management, and robust cluster communication and application failover mechanisms, a cluster can meet the needs of the enterprise market.

Bug Fixes

BZ#785866
With this update, a minor typographical error has been fixed in the /usr/share/cluster/cluster.rng.in.head RELAX NG schema.
BZ#803477
Previously, the fsck.gfs2 program printed irrelevant error messages when reclaiming free metadata blocks. These messages could have been incorrectly understood as file system errors. With this update, these messages are no longer displayed.
BZ#814807
The master_wins implementation of the qdiskd daemon did not hand over the master status quickly enough during an ordered shutdown. Consequently, the cluster could temporarily lose quorum. With this update, master_wins has been modified to operate more quickly.
BZ#838047
Previously, the master_wins implementation of the qdiskd daemon did not check strictly for errors in the /etc/cluster/cluster.conf file. Consequently, with several incorrect options in cluster.conf, two quorate partitions could have been created at the same time. With this update, master_wins has been modified to perform strict error checking to avoid the creation of multiple quorate partitions.
BZ#838945
Prior to this update, an overly long cluster name in the /etc/cluster/cluster.conf file could cause a buffer overflow when running the fsck.gfs2 utility on a GFS2 file system with a corrupt super block. With this update, the cluster name is truncated appropriately when the super block is being rebuilt. Now, the buffer overflow condition no longer occurs in the described case.
BZ#839241
Under certain circumstances, the cman cluster manager did not propagate two internal values across configuration reloads. Consequently, runtime inconsistencies could occur. This bug has been fixed, and the aforementioned error no longer occurs. Also, a corner case memory leak has been fixed.
BZ#845341
Prior to this update, the fenced daemon created the /var/log/cluster/fenced.log file with world-readable permissions. With this update, fenced has been modified to set stricter permissions on its log file. In addition, the permissions of an existing log file are corrected automatically if necessary.
BZ#847234
Previously, an insufficient buffer length limitation did not allow long configuration lines in the /etc/cluster/cluster.conf configuration file. Consequently, a long entry in the file caused the corosync utility to terminate unexpectedly with a segmentation fault. With this update, the length limit has been extended. As a result, the segmentation fault no longer occurs in this situation.
BZ#853180
When a GFS2 file system was mounted with the lock_nolock option enabled, the cman cluster manager incorrectly checked the currently used resources. Consequently, cman failed to start. This bug has been fixed, and cman now starts successfully in the described case.
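For reference, the locking protocol can be overridden at mount time to use single-node (nolock) locking; the device path and mount point below are illustrative placeholders:
# mount a GFS2 file system without cluster locking (illustrative device and mount point)
mount -t gfs2 -o lockproto=lock_nolock /dev/vg_cluster/lv_gfs2 /mnt/gfs2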
BZ#854032
In certain corner cases, triggered especially when shutting down all cluster nodes at the same time, the cluster daemons failed to quit within the cman shutdown limit (10 seconds). Consequently, the cman cluster manager declared a shutdown error. With this update, the default shutdown timeout has been increased to 30 seconds to prevent the shutdown error.
BZ#857952
Under rare circumstances, the fenced daemon polled an incorrect file descriptor from the cman cluster manager. Consequently, fenced entered a loop and the cluster became unresponsive. This bug has been fixed, and the aforementioned error no longer occurs.
BZ#861340
The fenced daemon is usually started before the messagebus (D-BUS) service, which has no harmful operational effects. Previously, this behavior was recorded as an error message in the /var/log/cluster/fenced.log file. To avoid confusion, this error message is now entered into /var/log/cluster/fenced.log only when the log level is set to debugging.
BZ#862847
Previously, the mkfs.gfs2 -t command accepted non-standard characters, such as the slash (/), in the lock table name. Consequently, only the first cluster node was able to mount a GFS2 file system successfully; the next node attempting to mount the file system became unresponsive. With this update, stricter validation of lock table names has been introduced. As a result, cluster nodes no longer hang when special characters are used in the lock table name.
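A lock table name has the form clustername:fsname, where clustername must match the cluster name in /etc/cluster/cluster.conf and neither part may contain special characters such as a slash. A minimal sketch with illustrative names and an illustrative block device:
# create a GFS2 file system with a valid lock table name and two journals
mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/vg_cluster/lv_gfs2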
BZ#887787
Previously, when a client using the cman API called the cman_stop_notification() function after cman had already been closed, the client terminated with the SIGPIPE signal. With this update, the underlying source code has been modified to use the MSG_NOSIGNAL flag when sending data, so the client no longer receives the SIGPIPE signal in the described scenario.
BZ#888053
Prior to this update, the gfs2_convert tool was unable to handle certain corner cases when converting between GFS1 and GFS2 file systems. Consequently, the converted GFS2 file system contained errors. With this update, gfs2_convert has been fixed to detect these corner cases and adjust the converted file system accordingly.

Enhancements

BZ#661764
The cman cluster manager now supports network bonding modes 0, 1, and 2. Prior to this update, only bonding mode 1 was supported.
BZ#738704
This update adds support for clusters utilizing the Red Hat Enterprise Virtualization Manager native shared storage between nodes.
BZ#786118
Hostname aliases defined in the /etc/hosts file are now accepted as cluster node names across cluster applications.
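For example (the address and names are illustrative), an alias defined in /etc/hosts can now be referenced as the node name in /etc/cluster/cluster.conf:
# /etc/hosts
192.168.1.10   node1.example.com   node1-hb
# /etc/cluster/cluster.conf
<clusternode name="node1-hb" nodeid="1"/>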
BZ#797952
A new tool, fence_check, has been added to provide a method of testing the fence configuration in a non-disruptive way. The tool is designed to be run via the crontab utility for regular monitoring of fence devices.
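A sketch of such a cron job, assuming an /etc/cron.d entry; the schedule, binary path, and log file location are illustrative:
# run fence_check every six hours and append the result to a log file
0 */6 * * * root /usr/sbin/fence_check >> /var/log/cluster/fence_check.log 2>&1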
BZ#821016
This update enables passing additional command line options to the dlm_controld daemon using the /etc/sysconfig/cman file.
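As a sketch only (the variable name shown is an assumption, and the placeholder must be replaced with real dlm_controld options), the extra options would be added to /etc/sysconfig/cman:
# pass extra command line options to dlm_controld at startup (variable name is an assumption)
DLM_CONTROLD_OPTS="<additional dlm_controld options>"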
BZ#842370
The Distributed Lock Manager (DLM) now allows tuning of its hash table sizes through the /etc/sysconfig/cman file. The following parameters can be set in that file:
DLM_LKBTBL_SIZE=<size_of_table>
DLM_RSBTBL_SIZE=<size_of_table>
DLM_DIRTBL_SIZE=<size_of_table>
These parameters modify the values in the following files, respectively:
/sys/kernel/config/dlm/cluster/lkbtbl_size
/sys/kernel/config/dlm/cluster/rsbtbl_size
/sys/kernel/config/dlm/cluster/dirtbl_size
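For example (the sizes below are illustrative, not recommended values), the following lines in /etc/sysconfig/cman enlarge the three tables; the resulting values can be verified in the /sys/kernel/config/dlm/cluster/ files listed above after cman starts:
# illustrative DLM hash table sizes set in /etc/sysconfig/cman
DLM_LKBTBL_SIZE=4096
DLM_RSBTBL_SIZE=4096
DLM_DIRTBL_SIZE=4096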
BZ#857299
Previously, it was not possible to modify the default TCP port (21064) of the Distributed Lock Manager (DLM). With this update, the DLM_TCP_PORT configuration parameter has been added into the /etc/sysconfig/cman file. As a result, the DLM TCP port can be manually configured.
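For example (the port number is illustrative), the following line in /etc/sysconfig/cman moves DLM traffic to a non-default port; the same port must be configured on every cluster node:
# illustrative non-default DLM TCP port
DLM_TCP_PORT=21066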
BZ#860048
The fsck.gfs2 program now checks for mismatches between the formal inode numbers stored in directory entries and those stored in the corresponding on-disk inodes in the GFS2 file system.
BZ#860847
This update adds support for two-node and four-node clusters utilizing the rgmanager daemon with the rrp_mode option enabled.
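A minimal sketch of the relevant /etc/cluster/cluster.conf fragments (the node name, alternate name, and the passive mode chosen here are illustrative assumptions):
<totem rrp_mode="passive"/>
<clusternode name="node1" nodeid="1">
    <altname name="node1-alt"/>
</clusternode>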
BZ#878196
This update adds support for clusters utilizing VMware's VMDK (Virtual Machine Disk) disk image technology with the multi-writer option. This allows VMDK-based storage with the multi-writer option to be used for clustered file systems such as GFS2.
All users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.