4.90. rgmanager
Updated rgmanager packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 5.
The rgmanager packages contain the Red Hat Resource Group Manager, which is used to create and manage high-availability server applications in the event of system downtime.
Bug Fixes
- BZ#865462
- Previously, when an IPv6 address was specified in the cluster configuration with upper-case letters, an error message was returned and attempts to start a cluster service configured this way failed. With this update, the IP address is handled regardless of letter case, and the service starts as expected whether the address is written in upper or lower case.
- BZ#869705
- Previously, SAP instances started by the SAPInstance cluster resource agent inherited system resource limits, for example the maximum number of open file descriptors of the root user, because the configured limits could not be applied by PAM due to the way the SAP processes were started by the cluster. With this update, the SAPInstance resource agent takes the limits configured in the /usr/sap/sapservices/ directory into account. If no limits are specified there, safe default limits are applied.
- BZ#879029
- When a service was configured with a recoverable resource, such as nfsclient, a failure of that resource correctly triggered the recovery function. However, even if the recovery was successful, rgmanager still stopped and recovered the whole service. This happened because the do_status function recorded the failure in rn_last_status, then ran the recovery, but did not record the new status in rn_last_status. This update resets rn_last_status to 0 after the resource is recovered, so rgmanager recovers the resource and leaves the service running afterwards.
- BZ#883860
- Previously, certain man pages from the rgmanager packages were incorrectly installed with mode 0755, that is, with the executable bits set. With this update, the man pages are installed with the correct mode, 0644.
- BZ#889098
- When the /etc/cluster/cluster.conf file was modified and distributed to the other nodes with the ccs_tool update command, the file was changed on all nodes, but the change was not applied in the cluster (the usual update workflow is sketched after this list). This happened because a bug in the code queued a new configuration event whenever a configuration change was detected while a configuration event was already being processed, which caused more and more events to be queued, possibly indefinitely. In addition, under certain circumstances, rgmanager requested cluster information without properly initializing its internal structures. With this update, both problems have been fixed: each configuration update event is queued only once, and configuration changes are now applied in the cluster as expected.
- BZ#907898
- Due to an incorrect SELinux context in the /var/lib/nfs/statd/sm/ directory, the rpc.statd daemon was unable to start. This problem occurred only if the cluster included NFS mounts and the /var/lib/nfs/statd/sm/ directory therefore contained files. With this update, the "-Rdpf" flags are passed to the "cp" command instead of "-af" when copying files to the /var/lib/nfs/statd/sm/ directory, so that the SELinux context is inherited from the target directory rather than preserved from the files being copied (see the example after this list).
- BZ#909459
- Previously, in central processing mode, rgmanager did not handle certain inter-service dependencies correctly. If a service depended on another service that ran on the same cluster node, the dependent service became unresponsive during the service failover and remained in the recovering state. With this update, rgmanager has been modified to check the state of services during failover and to stop any service that depends on the service that is failing over. The Resource Group Manager then attempts to start the dependent service on other nodes.
- BZ#962376
- When using the High Availability Logical Volume Management (HA-LVM) agents, a failure of some of the physical volumes (PVs) in a volume group (VG) caused the agent to call "vgreduce --removemissing --force [vg]", thus removing the missing PVs together with any logical volumes (LVs) that were on them. While this was helpful when recovering from the loss of a mirror leg, it was problematic with linear LVs, especially if another node had no trouble accessing the storage. This update adds the "--mirrorsonly" option to the "vgreduce --removemissing" calls in the LVM agents, so that HA-LVM removes missing PVs on stop only when they belong to mirrors (see the example after this list).
- BZ#968322
- A general protection fault in the malloc_consolidate function caused rgmanager to terminate unexpectedly with a segmentation fault during a status check. This update fixes some instances where a very unlikely NULL pointer dereference could occur, and rgmanager no longer crashes in this situation.
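For reference, the sketch below shows the usual way the configuration change described in BZ#889098 is rolled out on a RHEL 5 cluster, using only the ccs_tool and clustat commands; the cluster name in the comment is a placeholder.

```
# Edit the configuration on one node and increase config_version, for example
# <cluster name="example" config_version="42"> becomes config_version="43".
vi /etc/cluster/cluster.conf

# Propagate the new configuration version to the other cluster nodes.
ccs_tool update /etc/cluster/cluster.conf

# Confirm that rgmanager still reports the expected services and states.
clustat
```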
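The flag change from BZ#907898 can be illustrated with plain cp; the source directory below is purely hypothetical and stands in for wherever the resource agent keeps the saved statd state files.

```
# "-af" preserves all file attributes, including the SELinux context of the
# source files, which can leave /var/lib/nfs/statd/sm/ mislabeled:
cp -af /path/to/saved/sm/. /var/lib/nfs/statd/sm/

# "-Rdpf" copies recursively and preserves mode, ownership, and timestamps,
# while the SELinux context is inherited from the target directory:
cp -Rdpf /path/to/saved/sm/. /var/lib/nfs/statd/sm/

# The resulting contexts can be checked with:
ls -Z /var/lib/nfs/statd/sm/
```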
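The change from BZ#962376 amounts to the following difference in the vgreduce call made by the LVM agents; the volume group name is a placeholder.

```
# Before the update: removes the missing PVs together with any LVs that used
# them, linear volumes included.
vgreduce --removemissing --force my_vg

# After the update: only mirror images on the missing PVs are eligible for
# removal, so linear LVs are left alone.
vgreduce --removemissing --force --mirrorsonly my_vg
```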
Enhancements
- BZ#670024
- Previous versions of the Oracle resource agents were tested only against Oracle Database 10g. With this update, support for Oracle Database 11g has been added to the oracledb, orainstance, and oralistener resource agents.
- BZ#841142
- This update fixes a non-critical typographical error in the ASEHAagent.sh resource agent.
Users of rgmanager are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
Updated rgmanager packages that fix one bug are now available for Red Hat Enterprise Linux 5.
The rgmanager packages contain the Red Hat Resource Group Manager, which allows users to create and manage high-availability server applications in the event of system downtime.
Bug Fix
- BZ#912625
- Previously, in central processing mode, rgmanager did not handle certain inter-service dependencies correctly. If a service depended on another service that ran on the same cluster node, the dependent service became unresponsive during the service failover and remained in the recovering state. With this update, rgmanager has been modified to check the state of services during failover and to stop any service that depends on the service that is failing over. The Resource Group Manager then attempts to start the dependent service on other nodes. A configuration excerpt illustrating such a dependency is shown after this list.
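For illustration only, the excerpt below shows one way such a dependency is typically expressed in /etc/cluster/cluster.conf when central processing mode is enabled; the service names are placeholders, and the attribute names should be verified against the rgmanager documentation shipped with your release.

```
<rm central_processing="1">
  <service name="db" autostart="1"/>
  <!-- "app" depends on "db"; with this fix it is stopped during the failover
       of "db" and restarted on another node instead of remaining in the
       recovering state. -->
  <service name="app" autostart="1" depend="service:db" depend_mode="soft"/>
</rm>
```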
Users of rgmanager are advised to upgrade to these updated packages, which fix this bug.
Updated rgmanager packages that fix a bug are now available for Red Hat Enterprise Linux 5.
The rgmanager packages contain the Red Hat Resource Group Manager, which is used to create and manage high-availability server applications in the event of system downtime.
Bug Fix
- BZ#967456
- Previously, the NFS resource agents preserved the SELinux context when copying files to the /var/lib/nfs/sm/ directory. As a result, the copied files did not inherit the SELinux context of /var/lib/nfs/sm/, causing AVC denial messages that prevented the resource agents from operating properly. With this update, the NFS resource agents no longer preserve the SELinux context of files copied to /var/lib/nfs/sm/ (the resulting contexts can be checked as shown after this list).
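After applying the update, the contexts can be verified, and any stale labels reset, with the standard SELinux tools; a minimal check could look like this:

```
# Files copied by the resource agents should now carry the same SELinux
# context as the directory itself:
ls -dZ /var/lib/nfs/sm/
ls -Z /var/lib/nfs/sm/

# Labels left over from before the update can be reset to the policy default:
restorecon -Rv /var/lib/nfs/sm/
```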
Users of rgmanager are advised to upgrade to these updated packages, which fix this bug.
Updated rgmanager packages that add one enhancement are now available for Red Hat Enterprise Linux 5.
The rgmanager packages contain the Red Hat Resource Group Manager, which allows users to create and manage high-availability server applications in the event of system downtime.
Enhancement
- BZ#964991
- With this update, support for Oracle Database 11g has been added to the oracledb, orainstance, and oralistener resource agents.
Users of rgmanager are advised to upgrade to these updated packages, which add this enhancement.
Updated rgmanager packages that fix one bug are now available for Red Hat Enterprise Linux 5.
The rgmanager packages contain the Red Hat Resource Group Manager, which is used to create and manage high-availability server applications in the event of system downtime.
Bug Fix
- BZ#1004482
- Previously, if a device failed in a non-redundant (that is, neither mirrored nor RAID) logical volume controlled by HA-LVM, the entire logical volume could be automatically deleted from the volume group. Now, if a non-redundant logical volume suffers a device failure, HA-LVM fails to start the service rather than forcing the removal of the failed physical volumes from the volume group.
Users of rgmanager are advised to upgrade to these updated packages, which fix this bug.
Updated rgmanager packages that fix one bug are now available for Red Hat Enterprise Linux 5.
The rgmanager packages contain the Red Hat Resource Group Manager, which is used to create and manage high-availability server applications in the event of system downtime.
Bug Fix
- BZ#1009245
- Previously, cluster services that included a file system resource could fail over from one node to another if the /tmp directory filled up. A patch has been provided to fix this bug, and cluster services no longer fail over in this situation.
Users of rgmanager are advised to upgrade to these updated packages, which fix this bug.