Chapter 2. RHBA-2015:1848
The bugs contained in this chapter are addressed by advisory RHBA-2015:1848-06. Further information about this advisory is available at https://rhn.redhat.com/errata/RHBA-2015:1848-06.html.
gluster-nagios-addons
- BZ#1196144
- Previously, the nrpe service was not reloaded when the gluster-nagios-addons RPM was updated. Due to this, the user had to restart or reload the nrpe service manually to monitor the hosts properly. With this fix, the nrpe service is reloaded automatically when the gluster-nagios-addons RPM is updated.
nagios-server-addons
- BZ#1236290
- Previously, the nodes continued to send updates to the old service even after the Cluster Quorum service was renamed. Due to this, the Cluster Quorum service status was not reflected in Nagios. With this fix, the plugins on the nodes are updated so that notifications are pushed to the new service and the Cluster Quorum status is reflected correctly.
- BZ#1235651
- Previously, the volume status service did not report the status of dispersed and distributed dispersed volumes. With this fix, the volume status service includes the logic required to interpret the status of dispersed and distributed dispersed volumes, and the volume status is now displayed correctly (an illustrative sketch of this logic follows this list).
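The corrected interpretation can be pictured with a short Python sketch. The function names, state strings, and inputs below are illustrative assumptions, not the plugin's actual code; the underlying rule is that a dispersed subvolume with redundancy R tolerates up to R bricks being down before data becomes unavailable.

    # Illustrative sketch only; names and state strings are assumptions.
    def disperse_subvol_status(bricks_up, bricks_total, redundancy):
        """Classify one dispersed (erasure-coded) subvolume."""
        down = bricks_total - bricks_up
        if down == 0:
            return "UP"
        if down <= redundancy:
            # Erasure coding keeps the data fully available.
            return "UP (DEGRADED)"
        return "DOWN"

    def distributed_disperse_status(subvols):
        """subvols: list of (bricks_up, bricks_total, redundancy) tuples."""
        states = [disperse_subvol_status(*s) for s in subvols]
        if all(s.startswith("UP") for s in states):
            return "UP (DEGRADED)" if "UP (DEGRADED)" in states else "UP"
        if any(s.startswith("UP") for s in states):
            return "PARTIALLY AVAILABLE"
        return "DOWN"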
rhsc
- BZ#1250032
- Previously, the dashboard reported the status of all network interfaces, causing interfaces that were not in use to be reported as down. With this fix, only the status of interfaces that have an IP address assigned to them is displayed in the dashboard.
- BZ#1250024
- Previously, size unit conversion was handled only up to TiB, and hence the units were not displayed correctly for storage sizes of a petabyte and above. With this fix, size unit conversion is extended to handle units up to YiB (1024^8 bytes), and the dashboard displays the units correctly (see the unit-conversion sketch after this list).
- BZ#1204924
- Previously, the effective bond speed was not taken into consideration while calculating network utilization. Due to this, the network utilization displayed to the user was incorrect when the network interfaces were bonded. With this fix, the effective bond speed is taken into consideration and the network utilization is displayed correctly even when network interfaces are bonded (see the bond-speed sketch after this list).
- BZ#1225831
- Previously, the data alignment value was ignored by the python-blivet module during pvcreate, due to which the physical volume (PV) was always created with a data alignment of 1024. VDSM now invokes the lvm pvcreate command directly to fix this issue (see the pvcreate sketch after this list).
- BZ#1244902
- Previously, editing a host's protocol from XML-RPC to JSON-RPC and then activating the host caused the host to become non-operational due to connectivity issues. This issue is now fixed.
- BZ#1224616
- Previously, the Trends tab UI plugin did not send the 'Prefer' HTTP header as part of every REST API call. Due to this, the existing REST API session was invalidated whenever the user clicked the Trends tab, and the user was prompted to provide the user name and password again. With this fix, the header is sent with every call and the session remains valid (see the session-header sketch after this list).
- BZ#1230354
- Previously, proper descriptions for the geo-replication options were not displayed in the configuration option dialog. With this fix, the correct descriptions are displayed.
- BZ#1230348
- Previously, storage devices were not synced to Red Hat Gluster Storage Console for up to two hours after the user added hosts. Due to this, the user had to sync the devices manually by clicking the 'Sync' button to view the storage devices after adding hosts to the console. With this fix, storage devices from the host are synced automatically whenever the user activates, adds, or re-installs the host in the UI.
- BZ#1236696
- Previously, when a volume was restored to the state of one of its snapshots, the dashboard displayed brick delete alerts. This happened because, as part of the snapshot restore, the existing bricks were removed and new bricks were added with a new mount point, and the sync job generated an alert for this operation. With this fix, brick delete alerts are no longer generated after restoring a volume to the state of a snapshot.
- BZ#1234357
- Previously, because Red Hat Gluster Storage Console did not support cluster-level option operations (set and reset), the user had no way to set the cluster.enable-shared-storage volume option from the console. With this fix, this volume option is set automatically by the console when a new node is added to a volume that is participating as a master of a geo-replication session.
- BZ#1244714
- Previously, due to an issue in the code that handled the time-zone conversion of the execution time for volume snapshot schedules, the scheduled execution time was off by 12 hours. For example, if the execution time was scheduled as 10:00 AM, it was set as 10:00 PM. With this fix, the time-zone conversion logic for the snapshot schedule execution time is corrected (see the time-zone sketch after this list).
- BZ#1244865
- Previously, when bricks were created using the UI, the XFS file system was created with an inode size of 256 bytes rather than the recommended 512 bytes for disk types other than RAID6 and RAID10. This has now been fixed to use the recommended 512-byte inode size (see the mkfs sketch after this list).
- BZ#1240231
- Previously, if the gluster meta volume was deleted from the CLI and added back again, Red Hat Gluster Storage Console did not disable CLI-based volume snapshot scheduling again. With this fix, the gluster sync job in the console is modified so that, even if the meta volume is deleted and re-created, the console explicitly disables the CLI-based snapshot schedule.
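For BZ#1250024, a minimal sketch of the extended unit conversion, assuming a simple divide-by-1024 loop; the function name and output format are illustrative, not the console's actual code:

    # Binary units from bytes up to YiB (1024^8 bytes).
    UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]

    def human_readable(size_bytes):
        """Scale a byte count into the largest fitting binary unit."""
        size = float(size_bytes)
        for unit in UNITS:
            if size < 1024 or unit == UNITS[-1]:
                return "%.1f %s" % (size, unit)
            size /= 1024.0

    # human_readable(3 * 1024 ** 5) -> '3.0 PiB'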
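For BZ#1204924, a hedged sketch of the bond-speed idea; the mode handling is simplified and the function is illustrative, not VDSM's code:

    def effective_bond_speed(slave_speeds_mbps, bond_mode):
        """Approximate usable speed of a bonded interface in Mbps."""
        if bond_mode == "active-backup":
            # Only one slave carries traffic at a time.
            return max(slave_speeds_mbps)
        # Load-balancing modes can aggregate the slaves.
        return sum(slave_speeds_mbps)

    # Utilization is then computed against the effective speed, e.g.
    # utilization = traffic_mbps / effective_bond_speed([1000, 1000], "802.3ad")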
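For BZ#1225831, an illustrative way to invoke pvcreate directly so the requested data alignment is honored; the device path and alignment value are examples, not product defaults:

    import subprocess

    def create_pv(device, alignment_kib):
        """Create a physical volume with an explicit data alignment."""
        subprocess.check_call([
            "pvcreate",
            "--dataalignment", "%dk" % alignment_kib,  # e.g. 1280k
            device,
        ])

    # create_pv("/dev/sdb", 1280)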
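For BZ#1224616, a minimal sketch of keeping a REST API session alive with the 'Prefer: persistent-auth' header; the URL, credentials, and use of the requests library are assumptions for illustration, not taken from the plugin itself:

    import requests

    session = requests.Session()
    session.auth = ("admin@internal", "password")  # example credentials
    # Sending the header on every call asks the server to keep the
    # authenticated session alive instead of invalidating it.
    session.headers.update({"Prefer": "persistent-auth",
                            "Accept": "application/xml"})

    resp = session.get("https://rhsc.example.com/api/clusters")
    resp.raise_for_status()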
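For BZ#1244714, a minimal sketch of a correct time-zone conversion of a scheduled execution time, assuming pytz; the zone and date are examples only:

    from datetime import datetime
    import pytz

    user_tz = pytz.timezone("Asia/Kolkata")  # example zone (UTC+5:30)
    local_time = user_tz.localize(datetime(2015, 9, 1, 10, 0))  # 10:00 AM local
    utc_time = local_time.astimezone(pytz.utc)

    print(utc_time.strftime("%I:%M %p %Z"))  # 04:30 AM UTC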
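For BZ#1244865, an illustrative invocation that creates the brick file system with the recommended inode size; the device path is an example:

    import subprocess

    # mkfs.xfs -i size=512 sets the 512-byte inode size recommended here.
    subprocess.check_call(["mkfs.xfs", "-f", "-i", "size=512", "/dev/vg0/brick1"])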
rhsc-monitoring-uiplugin
- BZ#1230580
- Previously, when a brick in a dispersed volume was down, the status of the volume was displayed as partially available even though the volume was fully available. With this fix, the logic that handles the dispersed and distributed dispersed volume types is corrected, and the dashboard now displays the status of these volumes correctly.
vdsm
- BZ#1231722
- Previously, due to an issue with exception handling in VDSM and the engine, using an existing mount point while creating a brick resulted in an unexpected exception in the UI. With this fix, the correct error message is displayed when the given mount point is already in use (a sketch of such a check follows).
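A hedged sketch of the kind of pre-check implied by BZ#1231722, detecting whether a mount point already appears in /proc/mounts; the path and error message are illustrative:

    def mount_point_in_use(path):
        """Return True if 'path' is already a mount point."""
        with open("/proc/mounts") as mounts:
            # Field 1 of each /proc/mounts line is the mount point.
            return any(line.split()[1] == path for line in mounts)

    if mount_point_in_use("/bricks/brick1"):
        raise ValueError("Mount point /bricks/brick1 is already in use")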