Chapter 6. Clustering
clufter rebased to version 0.76.0 and fully supported
The clufter packages provide a tool for transforming and analyzing cluster configuration formats. They can be used to assist with migration from an older stack configuration to a newer configuration that leverages Pacemaker. The clufter tool, previously available as a Technology Preview, is now fully supported. For information on the capabilities of clufter, see the clufter(1) man page or the output of the clufter -h command. For examples of clufter usage, see the following Red Hat Knowledgebase article: https://access.redhat.com/articles/2810031.
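For a quick orientation, the following illustrative commands show how to consult that documentation from the command line; the per-command help invocation is given only as a sketch, so verify the available commands and options on your installed version:
clufter -h
man 1 clufter
clufter ccs2pcscmd -h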
The clufter packages have been upgraded to upstream version 0.76.0, which provides a number of bug fixes and new features. Among the notable updates are the following:
- When converting CMAN + RGManager stack-specific configuration into the respective Pacemaker configuration (or sequence of pcs commands) with the ccs2pcs* families of commands, the clufter tool no longer refuses to convert entirely valid lvm resource agent configuration.
- When converting CMAN-based configuration into the analogous configuration for a Pacemaker stack with the ccs2pcs family of commands, some resource-related configuration bits previously lost in processing (such as the maximum number of failures before returning a failure to a status check) are now propagated correctly.
- When producing pcs commands with the cib2pcs and pcs2pcscmd families of clufter commands, proper finalized syntax is now used for the alert handler definitions, for which the (default) behavior of a single-step push of the configuration changes is now respected.
- When producing pcs commands, the clufter tool now supports a preferred ability to generate pcs commands that update only the modifications made to a configuration, by means of a differential update, rather than pushing a wholesale update of the entire configuration. Likewise, when applicable, the clufter tool now supports instructing the pcs tool to configure user permissions (ACLs). For this to work across instances of various major versions of the document schemas, clufter gained the notion of internal on-demand format upgrades, mirroring the internal mechanics of pacemaker. Similarly, clufter is now capable of configuring the bundle feature.
- In any script-like output sequence, such as that produced by the ccs2pcscmd and pcs2pcscmd families of clufter commands, the intended shell interpreter is now emitted as a first, commented line, which the operating system also understands directly, in order to clarify where Bash rather than a mere POSIX shell is expected. This might have been misleading under some circumstances in the past.
- The Bash completion file for clufter no longer fails to work properly when the = character is used to specify an option's value in the sequence being completed.
Support for quorum devices in a Pacemaker cluster
Red Hat Enterprise Linux 7.4 provides full support for quorum devices, previously available as a Technology Preview. This feature provides the ability to configure a separate quorum device (QDevice) which acts as a third-party arbitration device for the cluster. Its primary use is to allow a cluster to sustain more node failures than standard quorum rules allow. A quorum device is recommended for clusters with an even number of nodes and highly recommended for two-node clusters. For information on configuring a quorum device, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/. (BZ#1158805)
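As an illustration, a minimal setup might proceed along these lines, where qdevice-host.example.com and the ffsplit algorithm are placeholder choices; the referenced documentation describes the authoritative procedure, including installing the required packages on the quorum device host:
# On the host that will provide the quorum device:
pcs qdevice setup model net --enable --start
# On one of the cluster nodes:
pcs quorum device add model net host=qdevice-host.example.com algorithm=ffsplit
pcs quorum status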
Support for Booth cluster ticket manager
Red Hat Enterprise Linux 7.4 provides full support for the Booth cluster ticket manager. This feature, previously available as a Technology Preview, allows you to configure multiple high availability clusters in separate sites that communicate through a distributed service to coordinate management of resources. The Booth ticket manager facilitates a consensus-based decision process for individual tickets, ensuring that specified resources are run at only one site at a time, the site for which a ticket has been granted. For information on configuring multi-site clusters with the Booth ticket manager, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/. (BZ#1302087, BZ#1305049)
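For illustration, a minimal Booth configuration might be sketched as follows; the site and arbitrator addresses and the ticket name are placeholders, and the referenced documentation covers the full procedure, including synchronizing the configuration to the other site and the arbitrator:
pcs booth setup sites 192.168.1.100 192.168.2.100 arbitrators 192.168.3.100
pcs booth ticket add apacheticket
pcs booth create ip 192.168.1.100
pcs booth ticket grant apacheticket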
Support added for using shared storage with the SBD daemon
Red Hat Enterprise Linux 7.4 provides support for using the SBD (Storage-Based Death) daemon with a shared block device. This allows you to enable fencing by means of a shared block device in addition to fencing by means of a watchdog device, which had previously been supported. The fence-agents package now provides the fence_sbd fence agent, which is needed to trigger and control the actual fencing by means of an RHCS-style fence agent. SBD is not supported on Pacemaker remote nodes. (BZ#1413951)
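As an illustration, a stonith resource using the new agent might be created as follows; the resource name and the device path are placeholders, and the available parameters can be listed with pcs stonith describe fence_sbd:
pcs stonith create fence-sbd fence_sbd devices=/dev/disk/by-id/shared-disk-example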
Full support for CTDB resource agent
The CTDB resource agent used to implement a Samba deployment is now supported in Red Hat Enterprise Linux. (BZ#1077888)
The High Availability and Resilient Storage Add-Ons are now available for IBM POWER, little endian
Red Hat Enterprise Linux 7.4 adds support for the High Availability and Resilient Storage Add-Ons for the IBM POWER, little endian architecture. Note that this support is provided only for cluster nodes running on PowerVM on POWER8 servers. (BZ#1289662, BZ#1426651)
pcs now provides the ability to set up a cluster with encrypted corosync communication
The pcs cluster setup command now supports a new --encryption flag that controls the setting of corosync encryption in a cluster. This allows users to set up a cluster with encrypted corosync communication in an environment that is not entirely trusted. (BZ#1165821)
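For example, a cluster with corosync encryption enabled might be created along these lines; the cluster and node names are placeholders, and the flag is shown with the value 1 on the assumption that it accepts 0 or 1:
pcs cluster setup --name my_cluster node1.example.com node2.example.com --encryption 1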
New commands for creating and removing remote and guest nodes
Red Hat Enterprise Linux 7.4 provides the following new commands for creating and removing remote and guest nodes:
- pcs cluster node add-guest
- pcs cluster node remove-guest
- pcs cluster node add-remote
- pcs cluster node remove-remote
These commands replace the pcs cluster remote-node add and pcs cluster remote-node remove commands, which have been deprecated. (BZ#1176018, BZ#1386512)
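For illustration, the new commands might be used as follows; the host names and the vm-guest1 resource (assumed to be an existing VirtualDomain resource for the guest) are placeholders, and pcs(8) documents the full syntax and options:
pcs cluster node add-remote remote1.example.com
pcs cluster node add-guest guest1.example.com vm-guest1
pcs cluster node remove-guest guest1.example.com
pcs cluster node remove-remote remote1.example.com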
Ability to configure pcsd bind addresses
You can now configure pcsd bind addresses in the /etc/sysconfig/pcsd file. In previous releases, pcsd always bound to all interfaces, which is not suitable for some users. By default, pcsd still binds to all interfaces. (BZ#1373614)
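A minimal sketch of restricting pcsd to a single address follows, assuming the PCSD_BIND_ADDR variable in /etc/sysconfig/pcsd; the address is a placeholder and pcsd must be restarted for the change to take effect:
# In /etc/sysconfig/pcsd:
PCSD_BIND_ADDR='192.168.1.10'
# Then restart the daemon:
systemctl restart pcsd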
New option to the pcs resource unmanage command to disable monitor operations
Even when a resource is in unmanaged mode, monitor operations are still run by the cluster. That may cause the cluster to report errors the user is not interested in, as those errors may be expected for a particular use case when the resource is unmanaged. The pcs resource unmanage command now supports the --monitor option, which disables monitor operations when putting a resource into unmanaged mode. Additionally, the pcs resource manage command also supports the --monitor option, which enables the monitor operations when putting a resource back into managed mode. (BZ#1303969)
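For example, with a placeholder resource named my-resource:
pcs resource unmanage my-resource --monitor
pcs resource manage my-resource --monitor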
Support for regular expressions in pcs command line when configuring location constraints
pcs now supports regular expressions in location constraints on the command line. These constraints apply to multiple resources based on the regular expression matching the resource name. This simplifies cluster management, as one constraint may be used where several were needed before. (BZ#1362493)
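For illustration, a single constraint of this kind might look like the following, where the pattern and the node name are placeholders:
pcs constraint location "regexp%webserver-.*" prefers node1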
Specifying nodes in fencing topology by a regular expression or a node attribute and its value
It is now possible to specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value.
For example, the following commands configure nodes node1, node2, and node3 to use fence devices apc1 and apc2, and nodes node4, node5, and node6 to use fence devices apc3 and apc4.
pcs stonith level add 1 "regexp%node[1-3]" apc1,apc2
pcs stonith level add 1 "regexp%node[4-6]" apc3,apc4
The following commands yield the same results by using node attribute matching.
pcs node attribute node1 rack=1
pcs node attribute node2 rack=1
pcs node attribute node3 rack=1
pcs node attribute node4 rack=2
pcs node attribute node5 rack=2
pcs node attribute node6 rack=2
pcs stonith level add 1 attrib%rack=1 apc1,apc2
pcs stonith level add 1 attrib%rack=2 apc3,apc4
(BZ#1261116)
Support for Oracle 11g for the resource agents Oracle and OraLsnr
Red Hat Enterprise Linux 7.4 provides support for Oracle Database 11g for the Oracle and OraLsnr resource agents used with Pacemaker. (BZ#1336847)
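As a sketch only, a database and listener pair might be configured roughly as follows; the lowercase agent names, the ORCL SID, and the resource names are assumptions, so confirm the agent names with pcs resource list and their parameters with pcs resource describe before use:
pcs resource create oradb ocf:heartbeat:oracle sid=ORCL
pcs resource create oralisten ocf:heartbeat:oralsnr sid=ORCL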
Support for using SBD with shared storage
Support has been added for configuring SBD (Storage-Based Death) with shared storage using the pcs commands. For information on SBD fencing, see https://access.redhat.com/articles/2943361. (BZ#1413958)
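As an illustrative sketch, enabling SBD with a shared device might look roughly like the following; the watchdog and device paths are placeholders and the exact option names can vary between pcs versions, so treat the referenced article and the pcs(8) man page as authoritative:
pcs stonith sbd enable --watchdog=/dev/watchdog --device=/dev/disk/by-id/shared-disk-example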
Support for NodeUtilization resource agent
Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent. The NodeUtilization agent can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. You can run the agent as a clone resource to have it automatically populate these parameters on each node. For information on the NodeUtilization resource agent and the resource options for this agent, run the pcs resource describe NodeUtilization command. For information on utilization and placement strategy in Pacemaker, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/s1-utilization-HAAR.html. (BZ#1430304)
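For illustration, the agent might be cloned across all nodes along these lines; the resource name is a placeholder, and setting a placement strategy (balanced is shown as one possibility) is what makes Pacemaker act on the populated utilization attributes:
pcs resource create node-util NodeUtilization --clone
pcs property set placement-strategy=balanced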