Chapter 4. RHBA-2015:0039
The bugs contained in this chapter are addressed by advisory RHBA-2015:0039. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2015-0039.html
gluster-nagios-addons
- BZ#1136205
- Previously, the Nagios plug-in sent the volume status request to the Red Hat Storage node without converting the Nagios host name to the respective IP address. When the glusterd service was stopped on one of the nodes in a Red Hat Storage Trusted Storage Pool, the volume status displayed a warning and the status information was empty. With this fix, the error scenarios are handled properly and the system ensures that the glusterd service is running before it sends such a request to a Red Hat Storage node.
- BZ#1109727
- Previously, when one of the bricks in a replica pair was down in a replicate volume, the status of the Geo-replication session was set to FAULTY. This resulted in the status of the Nagios plug-in being set to CRITICAL. With this fix, if only one of the bricks in a replica pair is down, the status of the Geo-replication session is set to PARTIAL FAULTY, because the Geo-replication session remains active on another Red Hat Storage node in such a scenario.
- BZ#1109752
- Previously, the Geo-replication status plug-in displayed a Warning state when the Red Hat Storage volume was locked due to another volume operation. With this fix, when a volume is locked, the command is executed again after a wait time. If the error message persists, the status plug-in displays the state as unknown.
- BZ#1141171
- Previously, the quorum service displayed an incorrect status. With this fix, a buffering issue is resolved and the quorum service displays the appropriate status.
- BZ#1143995
- Previously, when a brick was created from a thin-provisioned volume, the brick utilization did not reflect the actual utilization of the thin pool. With this fix, bricks on a thin logical volume display both the thin logical volume utilization and the actual thin pool utilization.
- BZ#1109702
- Previously, even after a volume was deleted, the volume information continued to appear in the output of the Cluster-quorum service plug-in. The plug-in retains the information of the volume which lost quorum and updates it only when quorum is either lost or regained. With this fix, the stale information in the output is removed and the plug-in output is displayed appropriately. As a result, information about deleted volumes is not present in the plug-in output.
- BZ#1120832
- Previously, when the value for the hostname_in_nagios parameter was not configured in the /etc/nagios/nagios_server.conf file, the corresponding log message that was recorded was unclear. With this fix, a clear message is displayed. A configuration sketch for this parameter is shown after this list.
- BZ#1105568
- Previously, the status messages for the CTDB, NFS, Quota, SMB, and Self Heal services were not clearly defined in the Nagios Remote Plug-in Executor. With this fix, the plug-ins for these services return the correct error messages, and when the glusterd service is offline, clear values are displayed in the Status and Status Information fields.
- BZ#1109723
- Previously, the Auto-config service would not work if the glusterd service was offline on any of the nodes in the Red Hat Storage trusted storage pool. With this fix, the Auto-config service works even if the glusterd service is down on some of the nodes in the trusted storage pool, provided that the glusterd service is running on the node that is used as the sync host by the Auto-config service.
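For reference, the hostname_in_nagios parameter mentioned in BZ#1120832 is set in the /etc/nagios/nagios_server.conf file on the Red Hat Storage node. The snippet below is only a minimal sketch; the section name and host value are illustrative placeholders rather than values taken from the product documentation:

    # /etc/nagios/nagios_server.conf (illustrative sketch)
    # The value must match the host name under which this node is
    # configured in Nagios; "rhs-node1.example.com" is a placeholder.
    [HOST-CONF]
    hostname_in_nagios=rhs-node1.example.com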
nagios-server-addons
- BZ#1128007
- Previously, when all the nodes in a Red Hat Storage trusted storage pool were offline, all the volumes were moved to an UNKNOWN state and the cluster status was displayed as UP with the message OK: None of the volumes are in critical state. With this fix, the status of all the volumes is considered while computing the status of the Red Hat Storage trusted storage pool.
- BZ#1109843
- Previously, if the host that was used for discovery was detached from the Red Hat Storage trusted storage pool, all the hosts would be removed from the Nagios configuration when an auto-discovery was performed. With this fix, the auto-config service does not remove any configuration details if the host used for discovery is detached from the Red Hat Storage trusted storage pool.
- BZ#1119233
- Previously, the graph for cluster utilization did not display values in percentage on the Y-axis. This happened because the plug-in used the default template where the scale value of the graph was not fixed. With this fix, a specific template is implemented for the Nagios plug-in.
- BZ#1139228
- Previously, if the host that was used for discovery was detached from the Red Hat Storage Trusted Storage Pool, all the hosts would be removed from the Nagios configuration when auto-discovery was performed. With this fix, the auto-config service does not remove the configurations and works as expected.
- BZ#1138943
- Previously, the auto-config service tried to restart the Nagios service even though there was a configuration error. As a result, the auto-config service reported the message restarted nagios successfully, even though the Nagios service was not running. With this fix, the configuration is checked before the Nagios service is restarted; a sketch of such a check is shown after this list.
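The following is a minimal sketch of the kind of pre-flight check described in BZ#1138943, assuming the standard nagios -v configuration verification mode and the default configuration path /etc/nagios/nagios.cfg; it illustrates the idea only and is not the exact logic used by the auto-config service:

    # Verify the Nagios configuration before restarting the service.
    # /etc/nagios/nagios.cfg is the usual default path and may differ.
    if nagios -v /etc/nagios/nagios.cfg; then
        service nagios restart
    else
        echo "Nagios configuration check failed; restart skipped" >&2
    fi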
rhsc
- BZ#1112183
- Previously, users could select a start date later than the end date in the Trends tab of the Red Hat Storage Console. With this fix, a validation is performed and an appropriate alert message is displayed.
- BZ#1152877
- Previously, when a host had multiple network addresses, the system failed to identify the brick correctly from the output of the gluster volume status command. As a result, the brick status appeared to be offline after a node restart even though the bricks were online. With this fix, brick statuses are displayed appropriately.
- BZ#1138143
- Previously, users could view only a few of the utilization graphs in the Trends tab of the Red Hat Storage Console. To view service based information, users had to navigate to the Nagios Web UI and there was no such link provided on the Red Hat Storage Console. With this release, a link is added to help the user navigate to the Nagios web UI from the Trends tab when monitoring is enabled.
- BZ#1138108
- Previously, the glusterpmd service needed to be started manually on the Red Hat Storage node after adding the node to the Red Hat Storage Console. With this fix, the glusterpmd service works as expected. To apply this fix, after updating the Red Hat Storage Console and the Red Hat Storage nodes to version 3.0.3, you must reinstall the Red Hat Storage nodes that were previously added to the Red Hat Storage Console.
- BZ#1111087
- Previously, there was no mechanism to enable the monitoring feature after disabling it. With this fix, the user can enable monitoring by executing the rhsc-monitoring enable command from the command line interface (see the usage sketch at the end of this list).
- BZ#1111079
- Previously, the Red Hat Storage Console installed Nagios and enabled monitoring by default. After the installation, if the user disabled the monitoring feature, the Nagios server would not stop running on the Red Hat Storage Console node. With this fix, to disable the monitoring feature, execute the rhsc-monitoring disable command on the command line interface; this stops the Nagios server and the Nagios Service Check Acceptor (NSCA) server.
- BZ#1106459
- Previously, an error was displayed when moving a Red Hat Storage node from one Red Hat Storage Trusted Storage Pool to another. With this fix, the checks that inhibit such movements are removed.
- BZ#1057574
- Previously, the add host operation using the SSH public key by following the Guide Me link failed because an incorrect authentication method was set. With this fix, hosts can be added successfully using the SSH public key.
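As noted in BZ#1111087 and BZ#1111079, monitoring on the Red Hat Storage Console is toggled with the rhsc-monitoring command. A minimal usage sketch, assumed to be run as root on the Red Hat Storage Console node, is shown below; the command output is not reproduced here:

    # Disable monitoring; this stops the Nagios server and the
    # Nagios Service Check Acceptor (NSCA) server on the Console node.
    rhsc-monitoring disable

    # Re-enable monitoring later.
    rhsc-monitoring enable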