Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
This document provides descriptions of the options and features that the Red Hat High Availability Add-On using Pacemaker supports. For a step-by-step basic configuration example, see Red Hat High Availability Add-On Administration.
You can configure a Red Hat High Availability Add-On cluster with the pcs configuration interface or with the pcsd GUI interface.
1.1. New and Changed Features
This section lists features of the Red Hat High Availability Add-On that are new since the initial release of Red Hat Enterprise Linux 7.
1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1
Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes.
- The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 6.11, “Cluster Resources Cleanup”. (Example invocations of this and several other commands in this list appear after the list.)
- You can specify a lifetime parameter for the pcs resource move command, as documented in Section 8.1, “Manually Moving Resources Around the Cluster”.
- As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 4.5, “Setting User Permissions”.
- Section 7.2.3, “Ordered Resource Sets” and Section 7.3, “Colocation of Resources” have been extensively updated and clarified.
- Section 6.1, “Resource Creation” documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically.
- Section 10.1, “Configuring Quorum Options” documents the new cluster quorum unblock feature, which prevents the cluster from waiting for all nodes when establishing quorum.
- Section 6.1, “Resource Creation” documents the before and after parameters of the pcs resource create command, which can be used to configure resource group ordering.
- As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 3.8, “Backing Up and Restoring a Cluster Configuration”.
- Small clarifications have been made throughout this document.
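For example, the following commands sketch minimal invocations of several of the features above. The resource name my_resource, the node name z1.example.com, and the backup file name clustercfg are placeholders for illustration only and do not come from this document.

    # Reset the status and failcount for all resources in the cluster
    pcs resource cleanup

    # Move my_resource to z1.example.com for one hour only (ISO 8601 duration)
    pcs resource move my_resource z1.example.com lifetime=PT1H

    # Back up the cluster configuration to a tarball and later restore it on all nodes
    pcs config backup clustercfg
    pcs config restore clustercfg.tar.bz2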
1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2
Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes.
- You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources, and other settings. For information on this command, see Section 8.1.2, “Moving a Resource to its Preferred Node”. (An example invocation appears after this list.)
- Section 13.2, “Event Notification with Monitoring Resources” has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications.
- When configuring fencing for redundant power supplies, you are now only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 5.10, “Configuring Fencing for Redundant Power Supplies”.
- This document now provides a procedure for adding a node to an existing cluster in Section 4.4.3, “Adding Cluster Nodes”.
- The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 7.1, “Simple Location Constraint Options”.
- Small clarifications and corrections have been made throughout this document.
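As a brief sketch of two of the features above, the following commands show how they might be invoked. The node name z1.example.com and the fence device names apc1 and apc2 are placeholders, not values taken from this document.

    # Move resources to their preferred nodes, as determined by current status and constraints
    pcs resource relocate run

    # Require both power devices to fence the node that has redundant power supplies
    pcs stonith level add 1 z1.example.com apc1,apc2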
1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3
Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes.
- Section 9.4, “The pacemaker_remote Service”, has been wholly rewritten for this version of the document.
- You can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. Pacemaker alert agents are described in Section 13.1, “Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)”.
- New quorum administration commands are supported with this release, which allow you to display the quorum status and to change the expected_votes parameter. These commands are described in Section 10.2, “Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)”, and illustrated in the example at the end of this section.
- You can now modify general quorum options for your cluster with the pcs quorum update command, as described in Section 10.3, “Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)”.
- You can configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. This feature is provided for technical preview only. For information on quorum devices, see Section 10.5, “Quorum Devices”.
- Red Hat Enterprise Linux release 7.3 provides the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. This feature is provided for technical preview only. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker.
- When configuring a KVM guest node running the pacemaker_remote service, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM. For information on configuring KVM guest nodes, see Section 9.4.5, “Configuration Overview: KVM Guest Node”.
Additionally, small clarifications and corrections have been made throughout this document.
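For example, the quorum administration and quorum option commands noted above might be used as follows. This is a minimal sketch; the vote count and option value shown are illustrative only.

    # Display the current quorum status of the cluster
    pcs quorum status

    # Change the number of votes the cluster expects
    pcs quorum expected-votes 3

    # Modify a general quorum option for the cluster
    pcs quorum update wait_for_all=1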
1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4
Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes.
- Red Hat Enterprise Linux release 7.4 provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker.
- Red Hat Enterprise Linux 7.4 provides full support for the ability to configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. For information on quorum devices, see Section 10.5, “Quorum Devices”.
- You can now specify nodes in fencing topology by a regular expression applied to a node name and by a node attribute and its value. For information on configuring fencing levels, see Section 5.9, “Configuring Fencing Levels”.
- Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent, which can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. For information on this resource agent, see Section 9.6.5, “The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later)”.
- For Red Hat Enterprise Linux 7.4, the cluster node add-guest and the cluster node remove-guest commands replace the cluster remote-node add and cluster remote-node remove commands. The pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes. For updated guest and remote node configuration procedures, see Section 9.3, “Configuring a Virtual Domain as a Resource”.
- Red Hat Enterprise Linux 7.4 supports the systemd resource-agents-deps target. This allows you to configure the appropriate startup order for a cluster that includes resources with dependencies that are not themselves managed by the cluster, as described in Section 9.7, “Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later)”. (An example drop-in appears after this list.)
- The format for the command to create a resource as a master/slave clone has changed for this release. For information on creating a master/slave clone, see Section 9.2, “Multistate Resources: Resources That Have Multiple Modes”.
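The following sketch illustrates two of the features above. The unit name foo.service, the guest node name guest1.example.com, and the resource name VM_resource are placeholders and are not taken from this document.

    # Ensure a service that the cluster does not manage is started before cluster resources
    mkdir -p /etc/systemd/system/resource-agents-deps.target.d
    printf '[Unit]\nRequires=foo.service\nAfter=foo.service\n' > /etc/systemd/system/resource-agents-deps.target.d/foo.conf
    systemctl daemon-reload

    # Integrate an existing virtual machine resource into the cluster as a guest node
    pcs cluster node add-guest guest1.example.com VM_resource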
1.1.5. New and Changed Features for Red Hat Enterprise Linux 7.5
Red Hat Enterprise Linux 7.5 includes the following documentation and feature updates and changes.
- As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. For information on querying a cluster with SNMP, see Section 9.8, “Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)”. (An example appears below.)
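A minimal sketch of enabling the SNMP agent follows, assuming the daemon is provided by the pcs-snmp package.

    # Install, start, and enable the pcs SNMP agent
    yum install pcs-snmp
    systemctl start pcs_snmp_agent.service
    systemctl enable pcs_snmp_agent.service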
1.1.6. New and Changed Features for Red Hat Enterprise Linux 7.8
Red Hat Enterprise Linux 7.8 includes the following documentation and feature updates and changes.
- As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node are locked to the node and cannot start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node’s resources to fail over to other nodes in the cluster. For information on configuring resources to remain stopped on clean node shutdown, see Section 9.9, “Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later)”. (An example appears below.)
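This behavior is controlled by the shutdown-lock cluster property, with the related shutdown-lock-limit property setting how long the lock is held. The following is a minimal sketch; the 180-second limit is an arbitrary illustrative value.

    # Keep a cleanly shut-down node's resources stopped rather than recovering them elsewhere
    pcs property set shutdown-lock=true

    # Optionally allow the locked resources to recover on other nodes after 180 seconds
    pcs property set shutdown-lock-limit=180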