Chapter 5. Clustering
Pacemaker does not update the fail count when on-fail=ignore is used
When a resource in a Pacemaker cluster failed to start, Pacemaker updated the resource's last failure time and fail count, even if the on-fail=ignore option was used. This could cause unwanted resource migrations. Now, Pacemaker does not update the fail count when on-fail=ignore is used. As a result, the failure is displayed in the cluster status output but is properly ignored, and thus does not cause resource migration. (BZ#1200853)
pacemaker and other Corosync clients again connect successfully
Previously, the libqb library had a limited buffer size when building names for IPC sockets. If the process IDs on the system exceeded 5 digits, they were truncated and the IPC socket names could become non-unique. As a consequence, clients of the Corosync cluster manager could fail to connect and could exit, assuming the cluster services were unavailable. This could include pacemaker which could fail, leaving no cluster services running. This update increases the buffer size used for building IPC socket names to cover the maximum possible process ID number. As a result, pacemaker and other Corosync clients start consistently and continue running regardless of the process ID size. (BZ#1276345)
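The truncation failure mode can be illustrated with a short sketch; the socket-name format below is purely illustrative, not libqb's actual naming scheme:

```python
def ipc_socket_name(pid, digits=5):
    # Emulate writing the PID into a buffer sized for only 5 digits,
    # as the old libqb effectively did (illustrative name format).
    name = "qb-ipc-%d" % pid
    return name[: len("qb-ipc-") + digits]

# Two distinct 6-digit PIDs collapse to the same truncated socket name,
# so the second client's connection attempt hits the wrong socket.
collision = ipc_socket_name(123456) == ipc_socket_name(123457)
```

Sizing the buffer for the largest possible PID (pid_max can be raised to 7 digits on 64-bit systems) makes the names unique again.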
Security features added to the luci interface to prevent clickjacking
Previously, luci was not defended against clickjacking, a technique for attacking a web site in which a user is tricked into performing unintended or malicious actions through elements purposefully injected on top of the genuine web page. To guard against this type of attack, luci is now served with X-Frame-Options: DENY and Content-Security-Policy: frame-ancestors 'none' headers, which are intended to prevent luci pages from being embedded within external, possibly malicious, web pages. Additionally, when a user configures luci to use a custom certificate that is properly anchored with a recognized CA certificate, a Strict-Transport-Security mechanism with a validity period of 7 days is enforced in newer web browsers, also by means of a dedicated HTTP header. These new static HTTP headers can be deactivated, should it be necessary to overcome incompatibilities, and a user can add custom static HTTP headers in the /etc/sysconfig/luci file, which provides examples. (BZ#1270958)
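As an illustration of what these headers accomplish, a client could verify the anti-clickjacking response headers along these lines; this is a hypothetical helper for illustration, not part of luci:

```python
def clickjacking_protected(headers):
    # Hypothetical check: a response forbids framing if it carries
    # either of the two headers luci now sends.
    xfo = headers.get("X-Frame-Options", "").strip().upper()
    csp = headers.get("Content-Security-Policy", "")
    return xfo == "DENY" or "frame-ancestors 'none'" in csp
```

A browser honoring either header will refuse to render the page inside a frame on another origin, which defeats the injected-overlay attack.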
glusterfs can now properly recover from failed synchronization of cached writes to the backend
Previously, if synchronization of cached writes to a Gluster backend failed due to a lack of space, write-behind marked the file descriptor (fd) as bad. This meant virtual machines could not recover and could not be restarted after synchronization to the backend failed for any reason. With this update, glusterfs retries synchronization to the backend on error until the synchronization succeeds or a flush occurs. Additionally, file descriptors are not marked as bad in this scenario, and only operations that overlap regions with failed synchronization fail until the synchronization is successful. Virtual machines can therefore be resumed normally once the underlying error condition is fixed and synchronization to the backend succeeds. (BZ#1171261)
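The change in behavior can be sketched abstractly. The following is a simplified model, not glusterfs code: instead of permanently marking the descriptor bad on the first failure, synchronization is simply retried until it succeeds:

```python
def flush_cached_writes(sync_fn, max_attempts=5):
    # Simplified model of the new behavior: retry the backend sync on
    # error instead of permanently failing the file descriptor.
    # sync_fn is a hypothetical callable that raises OSError on failure,
    # e.g. while the backend is out of space.
    for _ in range(max_attempts):
        try:
            sync_fn()
            return True
        except OSError:
            continue
    return False
```

Once the error condition (such as a full backend) is cleared, a later retry succeeds and the cached writes are drained normally.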
Fixed an AVC denial error when setting up Gluster storage on NFS-Ganesha clusters
Attempting to set up Gluster storage on an NFS-Ganesha cluster previously failed due to an Access Vector Cache (AVC) denial error. The responsible SELinux policy has been adjusted to allow handling of volumes mounted by NFS-Ganesha, and the described failure no longer occurs. (BZ#1241386)
Installing glusterfs no longer affects default logrotate settings
When installing the glusterfs packages on Red Hat Enterprise Linux 6, the glusterfs-logrotate and glusterfs-georep-logrotate files were previously installed with several global logrotate options. Consequently, the global options affected the default settings in the /etc/logrotate.conf file. The glusterfs RPMs have been rebuilt to prevent the default settings from being overridden. As a result, global settings in /etc/logrotate.conf continue to function as configured, without being overridden by settings from the glusterfs logrotate files. (BZ#1171865)
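The underlying rule is that directives placed outside a path block in any file under /etc/logrotate.d/ act globally and can override /etc/logrotate.conf. A well-behaved drop-in file scopes its options inside the path block, for example (the path and values below are illustrative, not the shipped glusterfs configuration):

```
/var/log/glusterfs/*.log {
    weekly
    rotate 52
    compress
    missingok
}
```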
Fence agent for DM Multipath no longer loses SCSI keys on non-cluster reboot
Previously, the fence agent for DM Multipath lost SCSI keys when the node was not rebooted using cluster methods. This resulted in an error when the cluster tried to fence the node. With this update, keys are properly regenerated after each reboot in this situation. (BZ#1254183)
Fence agent for HP Integrated Lights-Out (iLO) now uses TLS1.0 automatically when connection over SSL v3 fails
Previously, the fence agent for HP Integrated Lights-Out (iLO) required the tls1.0 argument in order to use TLS1.0 instead of SSL v3. With this update, TLS1.0 is used automatically when the connection over SSL v3 fails. (BZ#1256902)
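The fallback logic can be modeled abstractly; this is a toy sketch of protocol negotiation, not the fence agent's actual code:

```python
def pick_protocol(preference, server_supports):
    # Toy model of the agent's fallback: try SSL v3 first, then
    # TLS 1.0, returning the first protocol the remote side accepts.
    for proto in preference:
        if proto in server_supports:
            return proto
    return None
```

An iLO firmware that rejects SSL v3 therefore still gets a working connection over TLS 1.0, with no extra argument required.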