Chapter 11. Configuring a high-availability cluster by using RHEL system roles
With the ha_cluster system role, you can configure and manage a high-availability cluster that uses the Pacemaker high availability cluster resource manager.
11.1. Variables of the ha_cluster RHEL system role
In an ha_cluster RHEL system role playbook, you define the variables for a high availability cluster according to the requirements of your cluster deployment.
The variables you can set for an ha_cluster RHEL system role are as follows:
ha_cluster_enable_repos
A boolean flag that enables the repositories containing the packages that are needed by the ha_cluster RHEL system role. When this variable is set to true, the default value, you must have active subscription coverage for RHEL and the RHEL High Availability Add-On on the systems that you will use as your cluster members or the system role will fail.
ha_cluster_enable_repos_resilient_storage
(RHEL 8.10 and later) A boolean flag that enables the repositories containing resilient storage packages, such as dlm or gfs2. For this option to take effect, ha_cluster_enable_repos must be set to true. The default value of this variable is false.
ha_cluster_manage_firewall
(RHEL 8.8 and later) A boolean flag that determines whether the ha_cluster RHEL system role manages the firewall. When ha_cluster_manage_firewall is set to true, the firewall high availability service and the fence-virt port are enabled. When ha_cluster_manage_firewall is set to false, the ha_cluster RHEL system role does not manage the firewall. If your system is running the firewalld service, you must set the parameter to true in your playbook.
You can use the ha_cluster_manage_firewall parameter to add ports, but you cannot use the parameter to remove ports. To remove ports, use the firewall system role directly.
In RHEL 8.8 and later, the firewall is no longer configured by default, because it is configured only when ha_cluster_manage_firewall is set to true.
ha_cluster_manage_selinux
(RHEL 8.8 and later) A boolean flag that determines whether the ha_cluster RHEL system role manages the ports belonging to the firewall high availability service using the selinux RHEL system role. When ha_cluster_manage_selinux is set to true, the ports belonging to the firewall high availability service are associated with the SELinux port type cluster_port_t. When ha_cluster_manage_selinux is set to false, the ha_cluster RHEL system role does not manage SELinux.
If your system is running the selinux service, you must set this parameter to true in your playbook. Firewall configuration is a prerequisite for managing SELinux. If the firewall is not installed, managing the SELinux policy is skipped.
You can use the ha_cluster_manage_selinux parameter to add policy, but you cannot use the parameter to remove policy. To remove policy, use the selinux RHEL system role directly.
ha_cluster_cluster_present
A boolean flag which, if set to true, determines that an HA cluster will be configured on the hosts according to the variables passed to the role. Any cluster configuration not specified in the playbook and not supported by the role will be lost.
If ha_cluster_cluster_present is set to false, all HA cluster configuration will be removed from the target hosts.
The default value of this variable is true.
The following example playbook removes all cluster configuration on node1 and node2.
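A minimal sketch of such a playbook, assuming the role is applied under the rhel-system-roles.ha_cluster name referenced later in this chapter:

- name: Remove all cluster configuration
  hosts: node1 node2
  vars:
    ha_cluster_cluster_present: false    # removes any existing HA cluster configuration
  roles:
    - rhel-system-roles.ha_cluster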
ha_cluster_start_on_boot
A boolean flag that determines whether cluster services will be configured to start on boot. The default value of this variable is true.
ha_cluster_fence_agent_packages
List of fence agent packages to install. The default value of this variable is fence-agents-all, fence-virt.
ha_cluster_extra_packages
List of additional packages to be installed. The default value of this variable is no packages.
This variable can be used to install additional packages not installed automatically by the role, for example custom resource agents.
It is possible to specify fence agents as members of this list. However, ha_cluster_fence_agent_packages is the recommended role variable to use for specifying fence agents, so that its default value is overridden.
ha_cluster_hacluster_password
A string value that specifies the password of the hacluster user. The hacluster user has full access to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content with Ansible Vault. There is no default password value, and this variable must be specified.
ha_cluster_hacluster_qdevice_password
(RHEL 8.9 and later) A string value that specifies the password of the hacluster user for a quorum device. This parameter is needed only if the ha_cluster_quorum parameter is configured to use a quorum device of type net and the password of the hacluster user on the quorum device is different from the password of the hacluster user specified with the ha_cluster_hacluster_password parameter. The hacluster user has full access to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content with Ansible Vault. There is no default value for this password.
ha_cluster_corosync_key_src
The path to the Corosync authkey file, which is the authentication and encryption key for Corosync communication. It is highly recommended that you have a unique authkey value for each cluster. The key should be 256 bytes of random data.
If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes.
If this variable is set, ha_cluster_regenerate_keys is ignored for this key.
The default value of this variable is null.
ha_cluster_pacemaker_key_src
The path to the Pacemaker authkey file, which is the authentication and encryption key for Pacemaker communication. It is highly recommended that you have a unique authkey value for each cluster. The key should be 256 bytes of random data.
If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes.
If this variable is set, ha_cluster_regenerate_keys is ignored for this key.
The default value of this variable is null.
ha_cluster_fence_virt_key_src
The path to the fence-virt or fence-xvm pre-shared key file, which is the location of the authentication key for the fence-virt or fence-xvm fence agent.
If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes. If the ha_cluster RHEL system role generates a new key in this fashion, you should copy the key to your nodes' hypervisor to ensure that fencing works.
If this variable is set, ha_cluster_regenerate_keys is ignored for this key.
The default value of this variable is null.
ha_cluster_pcsd_public_key_src, ha_cluster_pcsd_private_key_src
The path to the pcsd TLS certificate and private key. If this is not specified, a certificate-key pair already present on the nodes will be used. If a certificate-key pair is not present, a random new one will be generated.
If you specify a private key value for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If these variables are set, ha_cluster_regenerate_keys is ignored for this certificate-key pair.
The default value of these variables is null.
ha_cluster_pcsd_certificates
(RHEL 8.8 and later) Creates a pcsd private key and certificate using the certificate RHEL system role.
If your system is not configured with a pcsd private key and certificate, you can create them in one of two ways:
- Set the ha_cluster_pcsd_certificates variable. When you set the ha_cluster_pcsd_certificates variable, the certificate RHEL system role is used internally and it creates the private key and certificate for pcsd as defined.
- Do not set the ha_cluster_pcsd_public_key_src, ha_cluster_pcsd_private_key_src, or ha_cluster_pcsd_certificates variables. If you do not set any of these variables, the ha_cluster RHEL system role will create pcsd certificates by means of pcsd itself. The value of ha_cluster_pcsd_certificates is set to the value of the variable certificate_requests as specified in the certificate RHEL system role. For more information about the certificate RHEL system role, see Requesting certificates using RHEL system roles.
The following operational considerations apply to the use of the ha_cluster_pcsd_certificates variable:
- Unless you are using IPA and joining the systems to an IPA domain, the certificate RHEL system role creates self-signed certificates. In this case, you must explicitly configure trust settings outside of the context of RHEL system roles. System roles do not support configuring trust settings.
- When you set the ha_cluster_pcsd_certificates variable, do not set the ha_cluster_pcsd_public_key_src and ha_cluster_pcsd_private_key_src variables.
- When you set the ha_cluster_pcsd_certificates variable, ha_cluster_regenerate_keys is ignored for this certificate-key pair.
The default value of this variable is [].
For an example ha_cluster RHEL system role playbook that creates TLS certificates and key files in a high availability cluster, see Creating pcsd TLS certificates and key files for a high availability cluster.
ha_cluster_regenerate_keys
A boolean flag which, when set to true, determines that pre-shared keys and TLS certificates will be regenerated. For more information about when keys and certificates will be regenerated, see the descriptions of the ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, ha_cluster_fence_virt_key_src, ha_cluster_pcsd_public_key_src, and ha_cluster_pcsd_private_key_src variables.
The default value of this variable is false.
ha_cluster_pcs_permission_list
Configures permissions to manage a cluster using pcsd. The items you configure with this variable are as follows:
- type - user or group
- name - user or group name
- allow_list - Allowed actions for the specified user or group:
  - read - View cluster status and settings
  - write - Modify cluster settings except permissions and ACLs
  - grant - Modify cluster permissions and ACLs
  - full - Unrestricted access to a cluster including adding and removing nodes and access to keys and certificates
The structure of the ha_cluster_pcs_permission_list variable and its default values are as follows:
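The sketch below shows the documented default of granting the haclient group the grant, read, and write permissions; confirm the exact defaults in the role's README.md file.

ha_cluster_pcs_permission_list:
  - type: group          # 'user' or 'group'
    name: haclient
    allow_list:
      - grant
      - read
      - write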
ha_cluster_cluster_name
The name of the cluster. This is a string value with a default of my-cluster.
ha_cluster_transport
(RHEL 8.7 and later) Sets the cluster transport method. The items you configure with this variable are as follows:
- type (optional) - Transport type: knet, udp, or udpu. The udp and udpu transport types support only one link. Encryption is always disabled for udp and udpu. Defaults to knet if not specified.
- options (optional) - List of name-value dictionaries with transport options.
- links (optional) - List of lists of name-value dictionaries. Each list of name-value dictionaries holds options for one Corosync link. It is recommended that you set the linknumber value for each link. Otherwise, the first list of dictionaries is assigned by default to the first link, the second one to the second link, and so on.
- compression (optional) - List of name-value dictionaries configuring transport compression. Supported only with the knet transport type.
- crypto (optional) - List of name-value dictionaries configuring transport encryption. By default, encryption is enabled. Supported only with the knet transport type.
For a list of allowed options, see the pcs -h cluster setup help page or the setup description in the cluster section of the pcs(8) man page. For more detailed descriptions, see the corosync.conf(5) man page.
The structure of the ha_cluster_transport variable is as follows:
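In the sketch below, the option names and values are placeholders.

ha_cluster_transport:
  type: knet
  options:
    - name: transport-option-name
      value: transport-option-value
  links:
    -                                   # options for the first Corosync link
      - name: link-option-name
        value: link-option-value
    -                                   # options for the second Corosync link
      - name: link-option-name
        value: link-option-value
  compression:
    - name: compression-option-name
      value: compression-option-value
  crypto:
    - name: crypto-option-name
      value: crypto-option-value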
For an example ha_cluster RHEL system role playbook that configures a transport method, see Configuring Corosync values in a high availability cluster.
ha_cluster_totem
(RHEL 8.7 and later) Configures Corosync totem. For a list of allowed options, see the pcs -h cluster setup help page or the setup description in the cluster section of the pcs(8) man page. For a more detailed description, see the corosync.conf(5) man page.
The structure of the ha_cluster_totem variable is as follows:
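In the sketch below, the option names and values are placeholders.

ha_cluster_totem:
  options:
    - name: totem-option-name
      value: totem-option-value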
For an example ha_cluster RHEL system role playbook that configures a Corosync totem, see Configuring Corosync values in a high availability cluster.
ha_cluster_quorum
(RHEL 8.7 and later) Configures cluster quorum. You can configure the following items for cluster quorum:
- options (optional) - List of name-value dictionaries configuring quorum. Allowed options are: auto_tie_breaker, last_man_standing, last_man_standing_window, and wait_for_all. For information about quorum options, see the votequorum(5) man page.
- device (optional) - (RHEL 8.8 and later) Configures the cluster to use a quorum device. By default, no quorum device is used.
  - model (mandatory) - Specifies a quorum device model. Only net is supported.
  - model_options (optional) - List of name-value dictionaries configuring the specified quorum device model. For model net, you must specify host and algorithm options.
    Use the pcs-address option to set a custom pcsd address and port to connect to the qnetd host. If you do not specify this option, the role connects to the default pcsd port on the host.
  - generic_options (optional) - List of name-value dictionaries setting quorum device options that are not model specific.
  - heuristics_options (optional) - List of name-value dictionaries configuring quorum device heuristics.
    For information about quorum device options, see the corosync-qdevice(8) man page. The generic options are sync_timeout and timeout. For model net options, see the quorum.device.net section. For heuristics options, see the quorum.device.heuristics section.
To regenerate a quorum device TLS certificate, set the ha_cluster_regenerate_keys variable to true.
The structure of the ha_cluster_quorum variable is as follows:
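In the sketch below, the option names, the qnetd host, and the algorithm are placeholders.

ha_cluster_quorum:
  options:
    - name: quorum-option-name
      value: quorum-option-value
  device:
    model: net
    model_options:
      - name: host
        value: quorum-device-host
      - name: algorithm
        value: quorum-device-algorithm
    generic_options:
      - name: generic-option-name
        value: generic-option-value
    heuristics_options:
      - name: heuristics-option-name
        value: heuristics-option-value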
For an example ha_cluster RHEL system role playbook that configures cluster quorum, see Configuring Corosync values in a high availability cluster. For an example ha_cluster RHEL system role playbook that configures a cluster using a quorum device, see Configuring a high availability cluster using a quorum device.
ha_cluster_sbd_enabled
(RHEL 8.7 and later) A boolean flag which determines whether the cluster can use the SBD node fencing mechanism. The default value of this variable is false.
For an example ha_cluster system role playbook that enables SBD, see Configuring a high availability cluster with SBD node fencing.
ha_cluster_sbd_options
(RHEL 8.7 and later) List of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page.
Supported options are:
- delay-start - defaults to false, documented as SBD_DELAY_START
- startmode - defaults to always, documented as SBD_START_MODE
- timeout-action - defaults to flush,reboot, documented as SBD_TIMEOUT_ACTION
- watchdog-timeout - defaults to 5, documented as SBD_WATCHDOG_TIMEOUT
For an example ha_cluster system role playbook that configures SBD options, see Configuring a high availability cluster with SBD node fencing.
When using SBD, you can optionally configure watchdog and SBD devices for each node in an inventory. For information about configuring watchdog and SBD devices in an inventory file, see Specifying an inventory for the ha_cluster system role.
ha_cluster_cluster_properties
List of sets of cluster properties for Pacemaker cluster-wide configuration. Only one set of cluster properties is supported.
The structure of a set of cluster properties is as follows:
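In the sketch below, the property names and values are placeholders.

ha_cluster_cluster_properties:
  - attrs:
      - name: property-name
        value: property-value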
By default, no properties are set.
The following example playbook configures a cluster consisting of node1 and node2 and sets the stonith-enabled and no-quorum-policy cluster properties.
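A minimal sketch of such a playbook; the cluster name, password handling, and property values are illustrative:

- name: Configure a cluster with cluster properties
  hosts: node1 node2
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: <password>
    ha_cluster_cluster_properties:
      - attrs:
          - name: stonith-enabled
            value: 'true'
          - name: no-quorum-policy
            value: stop
  roles:
    - rhel-system-roles.ha_cluster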
ha_cluster_node_options
(RHEL 8.10 and later) This variable defines settings which vary from one cluster node to another. It sets the options for the specified nodes, but does not specify which nodes form the cluster. You specify which nodes form the cluster with the hosts parameter in an inventory or a playbook.
The items you configure with this variable are as follows:
- node_name (mandatory) - Name of the node for which to define Pacemaker node attributes. It must match a name defined for a node.
- attributes (optional) - List of sets of Pacemaker node attributes for the node. Currently, only one set is supported. The first set is used and the rest are ignored.
The structure of the ha_cluster_node_options variable is as follows:
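In the sketch below, the node name and attribute names and values are placeholders.

ha_cluster_node_options:
  - node_name: node1
    attributes:
      - attrs:
          - name: attribute-name
            value: attribute-value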
By default, no node options are defined.
For an example ha_cluster RHEL system role playbook that includes node options configuration, see Configuring a high availability cluster with node attributes.
ha_cluster_resource_primitives
This variable defines Pacemaker resources configured by the RHEL system role, including fencing resources. You can configure the following items for each resource:
- id (mandatory) - ID of a resource.
- agent (mandatory) - Name of a resource or fencing agent, for example ocf:pacemaker:Dummy or stonith:fence_xvm. It is mandatory to specify stonith: for STONITH agents. For resource agents, it is possible to use a short name, such as Dummy, instead of ocf:pacemaker:Dummy. However, if several agents with the same short name are installed, the role will fail as it will be unable to decide which agent should be used. Therefore, it is recommended that you use full names when specifying a resource agent.
- instance_attrs (optional) - List of sets of the resource's instance attributes. Currently, only one set is supported. The exact names and values of attributes, as well as whether they are mandatory or not, depend on the resource or fencing agent.
- meta_attrs (optional) - List of sets of the resource's meta attributes. Currently, only one set is supported.
- copy_operations_from_agent (optional) - (RHEL 8.9 and later) Resource agents usually define default settings for resource operations, such as interval and timeout, optimized for the specific agent. If this variable is set to true, those settings are copied to the resource configuration. Otherwise, clusterwide defaults apply to the resource. If you also define resource operation defaults for the resource with the ha_cluster_resource_operation_defaults role variable, you can set this to false. The default value of this variable is true.
- operations (optional) - List of the resource's operations.
  - action (mandatory) - Operation action as defined by Pacemaker and the resource or fencing agent.
  - attrs (mandatory) - Operation options; at least one option must be specified.
The structure of the resource definition that you configure with the ha_cluster RHEL system role is as follows:
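In the sketch below, the IDs, agent, and attribute names and values are placeholders.

ha_cluster_resource_primitives:
  - id: resource-id
    agent: resource-agent              # for example ocf:pacemaker:Dummy or stonith:fence_xvm
    instance_attrs:
      - attrs:
          - name: instance-attribute-name
            value: instance-attribute-value
    meta_attrs:
      - attrs:
          - name: meta-attribute-name
            value: meta-attribute-value
    copy_operations_from_agent: true
    operations:
      - action: operation-action       # for example monitor
        attrs:
          - name: operation-option-name
            value: operation-option-value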
By default, no resources are defined.
For an example ha_cluster RHEL system role playbook that includes resource configuration, see Configuring a high availability cluster with fencing and resources.
ha_cluster_resource_groups
This variable defines Pacemaker resource groups configured by the system role. You can configure the following items for each resource group:
- id (mandatory) - ID of a group.
- resources (mandatory) - List of the group's resources. Each resource is referenced by its ID and the resources must be defined in the ha_cluster_resource_primitives variable. At least one resource must be listed.
- meta_attrs (optional) - List of sets of the group's meta attributes. Currently, only one set is supported.
The structure of the resource group definition that you configure with the ha_cluster RHEL system role is as follows:
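In the sketch below, the group and resource IDs and the attribute names and values are placeholders.

ha_cluster_resource_groups:
  - id: group-id
    resources:
      - resource1-id
      - resource2-id
    meta_attrs:
      - attrs:
          - name: group-meta-attribute-name
            value: group-meta-attribute-value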
By default, no resource groups are defined.
For an example ha_cluster RHEL system role playbook that includes resource group configuration, see Configuring a high availability cluster with fencing and resources.
ha_cluster_resource_clones
This variable defines Pacemaker resource clones configured by the system role. You can configure the following items for a resource clone:
- resource_id (mandatory) - Resource to be cloned. The resource must be defined in the ha_cluster_resource_primitives variable or the ha_cluster_resource_groups variable.
- promotable (optional) - Indicates whether the resource clone to be created is a promotable clone, indicated as true or false.
- id (optional) - Custom ID of the clone. If no ID is specified, it will be generated. A warning will be displayed if this option is not supported by the cluster.
- meta_attrs (optional) - List of sets of the clone's meta attributes. Currently, only one set is supported.
The structure of the resource clone definition that you configure with the ha_cluster RHEL system role is as follows:
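In the sketch below, the IDs and the attribute names and values are placeholders.

ha_cluster_resource_clones:
  - resource_id: resource-to-be-cloned
    promotable: true
    id: custom-clone-id
    meta_attrs:
      - attrs:
          - name: clone-meta-attribute-name
            value: clone-meta-attribute-value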
By default, no resource clones are defined.
For an example ha_cluster RHEL system role playbook that includes resource clone configuration, see Configuring a high availability cluster with fencing and resources.
ha_cluster_resource_defaults
(RHEL 8.9 and later) This variable defines sets of resource defaults. You can define multiple sets of defaults and apply them to resources of specific agents using rules. The defaults you specify with the ha_cluster_resource_defaults variable do not apply to resources which override them with their own defined values.
Only meta attributes can be specified as defaults.
You can configure the following items for each defaults set:
- id (optional) - ID of the defaults set. If not specified, it is autogenerated.
- rule (optional) - Rule written using pcs syntax defining when and for which resources the set applies. For information on specifying a rule, see the resource defaults set create section of the pcs(8) man page.
- score (optional) - Weight of the defaults set.
- attrs (optional) - Meta attributes applied to resources as defaults.
The structure of the ha_cluster_resource_defaults variable is as follows:
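In the sketch below, the defaults sets are nested under a meta_attrs key as documented in the role's README.md file; the ID, rule, score, and attribute values are placeholders.

ha_cluster_resource_defaults:
  meta_attrs:
    - id: defaults-set-id
      rule: rule-string                # pcs rule syntax
      score: score-value
      attrs:
        - name: meta-attribute-name
          value: meta-attribute-value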
For an example ha_cluster RHEL system role playbook that configures resource defaults, see Configuring a high availability cluster with resource and resource operation defaults.
ha_cluster_resource_operation_defaults
(RHEL 8.9 and later) This variable defines sets of resource operation defaults. You can define multiple sets of defaults and apply them to resources of specific agents and specific resource operations using rules. The defaults you specify with the ha_cluster_resource_operation_defaults variable do not apply to resource operations which override them with their own defined values. By default, the ha_cluster RHEL system role configures resources to define their own values for resource operations. For information about overriding these defaults with the ha_cluster_resource_operation_defaults variable, see the description of the copy_operations_from_agent item in ha_cluster_resource_primitives.
Only meta attributes can be specified as defaults.
The structure of the ha_cluster_resource_operation_defaults variable is the same as the structure of the ha_cluster_resource_defaults variable, with the exception of how you specify a rule. For information about specifying a rule to describe the resource operation to which a set applies, see the resource op defaults set create section of the pcs(8) man page.
ha_cluster_stonith_levels
(RHEL 8.10 and later) This variable defines STONITH levels, also known as fencing topology. Fencing levels configure a cluster to use multiple devices to fence nodes. You can define alternative devices in case one device fails and you can require multiple devices to all be executed successfully to consider a node successfully fenced. For more information on fencing levels, see Configuring fencing levels in Configuring and managing high availability clusters.
You can configure the following items when defining fencing levels:
- level (mandatory) - Order in which to attempt the fencing level. Pacemaker attempts levels in ascending order until one succeeds.
- target (optional) - Name of a node this level applies to. You must specify one of the following three selections:
  - target_pattern - POSIX extended regular expression matching the names of the nodes this level applies to.
  - target_attribute - Name of a node attribute that is set for the node this level applies to.
  - target_attribute and target_value - Name and value of a node attribute that is set for the node this level applies to.
- resource_ids (mandatory) - List of fencing resources that must all be tried for this level.
By default, no fencing levels are defined.
The structure of the fencing levels definition that you configure with the ha_cluster RHEL system role is as follows:
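In the sketch below, the node name and fence device IDs are placeholders.

ha_cluster_stonith_levels:
  - level: 1
    target: node1
    resource_ids:
      - fence-device-1
      - fence-device-2
  - level: 2
    target: node1
    resource_ids:
      - fence-device-3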
For an example ha_cluster RHEL system role playbook that configures fencing levels, see Configuring a high availability cluster with fencing levels.
ha_cluster_constraints_location
This variable defines resource location constraints. Resource location constraints indicate which nodes a resource can run on. You can specify a resource by a resource ID or by a pattern, which can match more than one resource. You can specify a node by a node name or by a rule.
You can configure the following items for a resource location constraint:
- resource (mandatory) - Specification of a resource the constraint applies to.
- node (mandatory) - Name of a node the resource should prefer or avoid.
- id (optional) - ID of the constraint. If not specified, it will be autogenerated.
- options (optional) - List of name-value dictionaries.
  - score - Sets the weight of the constraint.
    - A positive score value means the resource prefers running on the node.
    - A negative score value means the resource should avoid running on the node.
    - A score value of -INFINITY means the resource must avoid running on the node.
    - If score is not specified, the score value defaults to INFINITY.
By default, no resource location constraints are defined.
The structure of a resource location constraint specifying a resource ID and node name is as follows:
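In the sketch below, the IDs, node name, and score are placeholders.

ha_cluster_constraints_location:
  - resource:
      id: resource-id
    node: node-name
    id: constraint-id
    options:
      - name: score
        value: score-value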
The items that you configure for a resource location constraint that specifies a resource pattern are the same items that you configure for a resource location constraint that specifies a resource ID, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows:
- pattern (mandatory) - POSIX extended regular expression resource IDs are matched against.
The structure of a resource location constraint specifying a resource pattern and node name is as follows:
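In the sketch below, the pattern, node name, ID, and score are placeholders.

ha_cluster_constraints_location:
  - resource:
      pattern: resource-pattern
    node: node-name
    id: constraint-id
    options:
      - name: score
        value: score-value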
You can configure the following items for a resource location constraint that specifies a resource ID and a rule:
- resource (mandatory) - Specification of a resource the constraint applies to.
  - id (mandatory) - Resource ID.
  - role (optional) - The resource role to which the constraint is limited: Started, Unpromoted, Promoted.
- rule (mandatory) - Constraint rule written using pcs syntax. For further information, see the constraint location section of the pcs(8) man page.
- Other items to specify have the same meaning as for a resource constraint that does not specify a rule.
The structure of a resource location constraint that specifies a resource ID and a rule is as follows:
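In the sketch below, the ID, role, rule, and score are placeholders.

ha_cluster_constraints_location:
  - resource:
      id: resource-id
      role: resource-role
    rule: rule-string
    id: constraint-id
    options:
      - name: score
        value: score-value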
The items that you configure for a resource location constraint that specifies a resource pattern and a rule are the same items that you configure for a resource location constraint that specifies a resource ID and a rule, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows:
- pattern (mandatory) - POSIX extended regular expression resource IDs are matched against.
The structure of a resource location constraint that specifies a resource pattern and a rule is as follows:
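In the sketch below, the pattern, role, rule, and score are placeholders.

ha_cluster_constraints_location:
  - resource:
      pattern: resource-pattern
      role: resource-role
    rule: rule-string
    id: constraint-id
    options:
      - name: score
        value: score-value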
For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.
ha_cluster_constraints_colocation
This variable defines resource colocation constraints. Resource colocation constraints indicate that the location of one resource depends on the location of another one. There are two types of colocation constraints: a simple colocation constraint for two resources, and a set colocation constraint for multiple resources.
You can configure the following items for a simple resource colocation constraint:
- resource_follower (mandatory) - A resource that should be located relative to resource_leader.
  - id (mandatory) - Resource ID.
  - role (optional) - The resource role to which the constraint is limited: Started, Unpromoted, Promoted.
- resource_leader (mandatory) - The cluster will decide where to put this resource first and then decide where to put resource_follower.
  - id (mandatory) - Resource ID.
  - role (optional) - The resource role to which the constraint is limited: Started, Unpromoted, Promoted.
- id (optional) - ID of the constraint. If not specified, it will be autogenerated.
- options (optional) - List of name-value dictionaries.
  - score - Sets the weight of the constraint.
    - Positive score values indicate the resources should run on the same node.
    - Negative score values indicate the resources should run on different nodes.
    - A score value of +INFINITY indicates the resources must run on the same node.
    - A score value of -INFINITY indicates the resources must run on different nodes.
    - If score is not specified, the score value defaults to INFINITY.
By default, no resource colocation constraints are defined.
The structure of a simple resource colocation constraint is as follows:
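In the sketch below, the IDs, roles, and score are placeholders.

ha_cluster_constraints_colocation:
  - resource_follower:
      id: resource-id1
      role: resource-role1
    resource_leader:
      id: resource-id2
      role: resource-role2
    id: constraint-id
    options:
      - name: score
        value: score-value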
You can configure the following items for a resource set colocation constraint:
- resource_sets (mandatory) - List of resource sets.
  - resource_ids (mandatory) - List of resources in a set.
  - options (optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
- id (optional) - Same values as for a simple colocation constraint.
- options (optional) - Same values as for a simple colocation constraint.
The structure of a resource set colocation constraint is as follows:
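In the sketch below, the resource IDs and all option names and values are placeholders.

ha_cluster_constraints_colocation:
  - resource_sets:
      - resource_ids:
          - resource-id1
          - resource-id2
        options:
          - name: set-option-name
            value: set-option-value
    id: constraint-id
    options:
      - name: score
        value: score-value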
For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.
ha_cluster_constraints_order
This variable defines resource order constraints. Resource order constraints indicate the order in which certain resource actions should occur. There are two types of resource order constraints: a simple order constraint for two resources, and a set order constraint for multiple resources.
You can configure the following items for a simple resource order constraint:
- resource_first (mandatory) - Resource that the resource_then resource depends on.
  - id (mandatory) - Resource ID.
  - action (optional) - The action that must complete before an action can be initiated for the resource_then resource. Allowed values: start, stop, promote, demote.
- resource_then (mandatory) - The dependent resource.
  - id (mandatory) - Resource ID.
  - action (optional) - The action that the resource can execute only after the action on the resource_first resource has completed. Allowed values: start, stop, promote, demote.
- id (optional) - ID of the constraint. If not specified, it will be autogenerated.
- options (optional) - List of name-value dictionaries.
By default, no resource order constraints are defined.
The structure of a simple resource order constraint is as follows:
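In the sketch below, the IDs, actions, and option values are placeholders.

ha_cluster_constraints_order:
  - resource_first:
      id: resource-id1
      action: resource-action1          # start, stop, promote, or demote
    resource_then:
      id: resource-id2
      action: resource-action2
    id: constraint-id
    options:
      - name: order-option-name
        value: order-option-value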
You can configure the following items for a resource set order constraint:
- resource_sets (mandatory) - List of resource sets.
  - resource_ids (mandatory) - List of resources in a set.
  - options (optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
- id (optional) - Same values as for a simple order constraint.
- options (optional) - Same values as for a simple order constraint.
The structure of a resource set order constraint is as follows:
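In the sketch below, the resource IDs and all option names and values are placeholders.

ha_cluster_constraints_order:
  - resource_sets:
      - resource_ids:
          - resource-id1
          - resource-id2
        options:
          - name: set-option-name
            value: set-option-value
    id: constraint-id
    options:
      - name: order-option-name
        value: order-option-value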
For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.
ha_cluster_constraints_ticket
This variable defines resource ticket constraints. Resource ticket constraints indicate the resources that depend on a certain ticket. There are two types of resource ticket constraints: a simple ticket constraint for one resource, and a ticket order constraint for multiple resources.
You can configure the following items for a simple resource ticket constraint:
- resource (mandatory) - Specification of a resource the constraint applies to.
  - id (mandatory) - Resource ID.
  - role (optional) - The resource role to which the constraint is limited: Started, Unpromoted, Promoted.
- ticket (mandatory) - Name of a ticket the resource depends on.
- id (optional) - ID of the constraint. If not specified, it will be autogenerated.
- options (optional) - List of name-value dictionaries.
  - loss-policy (optional) - Action to perform on the resource if the ticket is revoked.
By default, no resource ticket constraints are defined.
The structure of a simple resource ticket constraint is as follows:
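In the sketch below, the IDs, role, ticket name, and loss-policy value are placeholders.

ha_cluster_constraints_ticket:
  - resource:
      id: resource-id
      role: resource-role
    ticket: ticket-name
    id: constraint-id
    options:
      - name: loss-policy
        value: loss-policy-value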
You can configure the following items for a resource set ticket constraint:
- resource_sets (mandatory) - List of resource sets.
  - resource_ids (mandatory) - List of resources in a set.
  - options (optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
- ticket (mandatory) - Same value as for a simple ticket constraint.
- id (optional) - Same value as for a simple ticket constraint.
- options (optional) - Same values as for a simple ticket constraint.
The structure of a resource set ticket constraint is as follows:
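In the sketch below, the resource IDs, ticket name, and option values are placeholders.

ha_cluster_constraints_ticket:
  - resource_sets:
      - resource_ids:
          - resource-id1
          - resource-id2
        options:
          - name: set-option-name
            value: set-option-value
    ticket: ticket-name
    id: constraint-id
    options:
      - name: loss-policy
        value: loss-policy-value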
For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.
ha_cluster_qnetd
(RHEL 8.8 and later) This variable configures a qnetd host which can then serve as an external quorum device for clusters.
You can configure the following items for a qnetd host:
- present (optional) - If true, configure a qnetd instance on the host. If false, remove qnetd configuration from the host. The default value is false. If you set this to true, you must set ha_cluster_cluster_present to false.
- start_on_boot (optional) - Configures whether the qnetd instance should start automatically on boot. The default value is true.
- regenerate_keys (optional) - Set this variable to true to regenerate the qnetd TLS certificate. If you regenerate the certificate, you must either re-run the role for each cluster to connect it to the qnetd host again or run pcs manually.
You cannot run qnetd on a cluster node because fencing would disrupt qnetd operation.
For an example ha_cluster RHEL system role playbook that configures a cluster using a quorum device, see Configuring a cluster using a quorum device.
11.2. Specifying an inventory for the ha_cluster RHEL system role
When configuring an HA cluster using the ha_cluster RHEL system role playbook, you configure the names and addresses of the nodes for the cluster in an inventory.
11.2.1. Configuring node names and addresses in an inventory
For each node in an inventory, you can optionally specify the following items:
- node_name - the name of a node in a cluster.
- pcs_address - an address used by pcs to communicate with the node. It can be a name, FQDN or an IP address and it can include a port number.
- corosync_addresses - list of addresses used by Corosync. All nodes which form a particular cluster must have the same number of addresses. The order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes.
The following example shows an inventory with targets node1 and node2. node1 and node2 must be either fully qualified domain names or names through which it is otherwise possible to connect to the nodes, for example, because the names are resolvable through the /etc/hosts file.
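A sketch of such an inventory; all node names and addresses are placeholder values:

all:
  hosts:
    node1:
      ha_cluster:
        node_name: node-A
        pcs_address: node1-address
        corosync_addresses:
          - 192.168.1.11
          - 192.168.2.11
    node2:
      ha_cluster:
        node_name: node-B
        pcs_address: node2-address:2224
        corosync_addresses:
          - 192.168.1.12
          - 192.168.2.12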
11.2.2. Configuring watchdog and SBD devices in an inventory
(RHEL 8.7 and later) When using SBD, you can optionally configure watchdog and SBD devices for each node in an inventory. Even though all SBD devices must be shared to and accessible from all nodes, each node can use different names for the devices. Watchdog devices can be different for each node as well. For information about the SBD variables you can set in a system role playbook, see the entries for ha_cluster_sbd_enabled and ha_cluster_sbd_options in Variables of the ha_cluster RHEL system role.
For each node in an inventory, you can optionally specify the following items:
- sbd_watchdog_modules (optional) - (RHEL 8.9 and later) Watchdog kernel modules to be loaded, which create /dev/watchdog* devices. Defaults to empty list if not set.
- sbd_watchdog_modules_blocklist (optional) - (RHEL 8.9 and later) Watchdog kernel modules to be unloaded and blocked. Defaults to empty list if not set.
- sbd_watchdog - Watchdog device to be used by SBD. Defaults to /dev/watchdog if not set.
- sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Defaults to empty list if not set. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).
The following example shows an inventory that configures watchdog and SBD devices for targets node1 and node2.
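A sketch of such an inventory; the watchdog modules, watchdog device, and disk paths are placeholder values:

all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001
          - /dev/disk/by-id/000002
    node2:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000003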
For an example procedure that creates a high availability cluster that uses SBD fencing, see Configuring a high availability cluster with SBD node fencing.
11.3. Creating pcsd TLS certificates and key files for a high availability cluster
You can use the ha_cluster RHEL system role to create Transport Layer Security (TLS) certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster RHEL system role uses the certificate RHEL system role internally to manage TLS certificates.
The connection between cluster nodes is secured using TLS encryption. By default, the pcsd daemon generates self-signed certificates. For many deployments, however, you may want to replace the default certificates with certificates issued by a certificate authority of your company and apply your company certificate policies for pcsd.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role.
- RHEL 8.8 and later
- For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.

Create a playbook file, for example, ~/playbook.yml, with the following content:
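The following is a minimal sketch of such a playbook; the cluster name, certificate file name, and host names are illustrative values:

---
- name: Create TLS certificates and key files in a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster that creates pcsd TLS certificates
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_pcsd_certificates:
          - name: FILENAME               # placeholder certificate file name
            common_name: "{{ ansible_hostname }}"
            ca: self-sign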
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_pcsd_certificates: <certificate_properties>
A variable that creates a self-signed pcsd certificate and private key files in /var/lib/pcsd. In this example, the pcsd certificate has the file name FILENAME.crt and the key file is named FILENAME.key.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.4. Configuring a high availability cluster running no resources
You can use the ha_cluster system role to configure a basic cluster in a simple, automatic way. Once you have created a basic cluster, you can use the pcs command-line interface to configure the other cluster components and behaviors on a resource-by-resource basis.
This example configures a basic two-node cluster with no fencing configured using the minimum required parameters.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.

Create a playbook file, for example, ~/playbook.yml, with the following content:
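The following is a minimal sketch of such a playbook; the cluster name and host names are illustrative values:

---
- name: Create a high availability cluster running no resources
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a basic two-node cluster
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true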
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.5. Configuring a high availability cluster with fencing and resources
The specific components of a cluster configuration depend on your individual needs, which vary between sites. You can use the ha_cluster RHEL system role to configure a cluster with a fencing device, cluster resources, resource groups, and a cloned resource.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.

Create a playbook file, for example, ~/playbook.yml, with the following content:
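The following is a condensed sketch of such a playbook; the fence agent, resource agents, and IDs are illustrative values:

---
- name: Create a high availability cluster with fencing and resources
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with fencing, resources, a group, and a clone
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_resource_primitives:
          - id: xvm-fencing
            agent: stonith:fence_xvm
            instance_attrs:
              - attrs:
                  - name: pcmk_host_list
                    value: node1 node2
          - id: example-1
            agent: ocf:pacemaker:Dummy
          - id: example-2
            agent: ocf:pacemaker:Dummy
          - id: example-3
            agent: ocf:pacemaker:Dummy
        ha_cluster_resource_groups:
          - id: group-1
            resources:
              - example-1
              - example-2
        ha_cluster_resource_clones:
          - resource_id: example-3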
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_resource_groups: <resource_groups>
A list of resource group definitions configured by the ha_cluster RHEL system role.
ha_cluster_resource_clones: <resource_clones>
A list of resource clone definitions configured by the ha_cluster RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.6. Configuring a high availability cluster with resource and resource operation defaults
In your cluster configuration, you can change the Pacemaker default values of a resource option for all resources. You can also change the default value for all resource operations in the cluster.
For information about changing the default value of a resource option, see Changing the default value of a resource option. For information about global resource operation defaults, see Configuring global resource operation defaults.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines resource and resource operation defaults.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role.
- RHEL 8.9 and later
- For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.

Create a playbook file, for example, ~/playbook.yml, with the following content:
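The following is a minimal sketch of such a playbook; the resource-stickiness and timeout defaults shown are illustrative values:

---
- name: Create a high availability cluster with resource and resource operation defaults
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster that defines resource and resource operation defaults
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_resource_defaults:
          meta_attrs:
            - attrs:
                - name: resource-stickiness
                  value: 100
        ha_cluster_resource_operation_defaults:
          meta_attrs:
            - attrs:
                - name: timeout
                  value: 60s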
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_resource_defaults: <resource_defaults>
A variable that defines sets of resource defaults.
ha_cluster_resource_operation_defaults: <resource_operation_defaults>
A variable that defines sets of resource operation defaults.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.7. Configuring a high availability cluster with fencing levels
You can use the ha_cluster RHEL system role to configure high availability clusters with fencing levels. When a node has multiple fencing devices, you need to define fencing levels for those devices to determine the order in which Pacemaker uses the devices when attempting to fence the node.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- RHEL 8.10 and later
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>
fence1_password: <fence1_password>
fence2_password: <fence2_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.

Create a playbook file, for example, ~/playbook.yml. This example playbook file configures a cluster running the firewalld and selinux services.
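The following is a condensed sketch of such a playbook; the fence agent, device addresses, user names, and host map are illustrative values, and the fence device passwords come from the vault file created above:

---
- name: Create a high availability cluster with fencing levels
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with two fence devices and fencing levels
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_resource_primitives:
          - id: fence-device-1
            agent: stonith:fence_apc_snmp
            instance_attrs:
              - attrs:
                  - name: ip
                    value: apc1.example.com
                  - name: username
                    value: apc_user
                  - name: password
                    value: "{{ fence1_password }}"
                  - name: pcmk_host_map
                    value: "node1:1;node2:2"
          - id: fence-device-2
            agent: stonith:fence_apc_snmp
            instance_attrs:
              - attrs:
                  - name: ip
                    value: apc2.example.com
                  - name: username
                    value: apc_user
                  - name: password
                    value: "{{ fence2_password }}"
                  - name: pcmk_host_map
                    value: "node1:1;node2:2"
        ha_cluster_stonith_levels:
          - level: 1
            target: node1
            resource_ids:
              - fence-device-1
          - level: 2
            target: node1
            resource_ids:
              - fence-device-2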
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_stonith_levels: <stonith_levels>
A variable that defines STONITH levels, also known as fencing topology, which configure a cluster to use multiple devices to fence nodes.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.8. Configuring a high availability cluster with resource constraints
When configuring a cluster, you can specify the behavior of cluster resources so that it is in line with your application requirements. You control the behavior of cluster resources by configuring resource constraints.
You can define the following categories of resource constraints:
- Location constraints, which determine which nodes a resource can run on. For information about location constraints, see Determining which nodes a resource can run on.
- Ordering constraints, which determine the order in which the resources are run. For information about ordering constraints, see Determining the order in which cluster resources are run.
- Colocation constraints, which specify that the location of one resource depends on the location of another resource. For information about colocation constraints, see Colocating cluster resources.
- Ticket constraints, which indicate the resources that depend on a particular Booth ticket. For information about Booth ticket constraints, see Multi-site Pacemaker clusters.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
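The following is a minimal sketch of such a playbook. The Dummy resources, the fence_xvm fencing device, the ticket name ticket-A, and the constraint scores are placeholder assumptions used only to illustrate the constraint variables; see the role README.md file for the full constraint syntax.

- name: Create a high availability cluster with resource constraints
  hosts: node1,node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with location, colocation, order, and ticket constraints
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_resource_primitives:
          - id: xvm-fencing
            agent: 'stonith:fence_xvm'
            instance_attrs:
              - attrs:
                  - name: pcmk_host_list
                    value: node1 node2
          - id: dummy-1
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-2
            agent: 'ocf:pacemaker:Dummy'
        ha_cluster_constraints_location:
          # Prefer running dummy-1 on node1
          - resource:
              id: dummy-1
            node: node1
            options:
              - name: score
                value: 200
        ha_cluster_constraints_colocation:
          # Keep dummy-2 on the same node as dummy-1
          - resource_leader:
              id: dummy-1
            resource_follower:
              id: dummy-2
            options:
              - name: score
                value: INFINITY
        ha_cluster_constraints_order:
          # Start dummy-1 before dummy-2
          - resource_first:
              id: dummy-1
            resource_then:
              id: dummy-2
        ha_cluster_constraints_ticket:
          # dummy-1 depends on the Booth ticket named ticket-A
          - resource:
              id: dummy-1
            ticket: ticket-A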
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_constraints_location: <location_constraints>
A variable that defines resource location constraints.
ha_cluster_constraints_colocation: <colocation_constraints>
A variable that defines resource colocation constraints.
ha_cluster_constraints_order: <order_constraints>
A variable that defines resource order constraints.
ha_cluster_constraints_ticket: <ticket_constraints>
A variable that defines Booth ticket constraints.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.9. Configuring Corosync values in a high availability cluster
You can use the ha_cluster RHEL system role to configure Corosync values in high availability clusters.
The corosync.conf file provides the cluster parameters used by Corosync, the cluster membership and messaging layer that Pacemaker is built on. For your system configuration, you can change some of the default parameters in the corosync.conf file. In general, you should not edit the corosync.conf file directly. You can, however, configure Corosync values by using the ha_cluster RHEL system role.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- RHEL 8.7 and later
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
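The following is a minimal sketch of such a playbook. The knet transport, the crypto settings, the token timeout, and the auto_tie_breaker quorum option are example values chosen for illustration; see the role README.md file for all supported Corosync options.

- name: Create a high availability cluster that configures Corosync values
  hosts: node1,node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure Corosync transport, totem, and quorum options
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_transport:
          # Use the knet transport with encrypted traffic
          type: knet
          crypto:
            - name: cipher
              value: aes256
            - name: hash
              value: sha256
        ha_cluster_totem:
          options:
            # Token timeout in milliseconds
            - name: token
              value: 5000
        ha_cluster_quorum:
          options:
            - name: auto_tie_breaker
              value: 1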
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_transport: <transport_method>
A variable that sets the cluster transport method.
ha_cluster_totem: <totem_options>
A variable that configures Corosync totem options.
ha_cluster_quorum: <quorum_options>
A variable that configures cluster quorum options.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.10. Configuring a high availability cluster with SBD node fencing
(RHEL 8.7 and later) The following procedure uses the ha_cluster RHEL system role to create a high availability cluster that uses SBD node fencing.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
This playbook uses an inventory file that loads a watchdog module (supported in RHEL 8.9 and later) as described in Configuring watchdog and SBD devices in an inventory.
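For reference, such an inventory might look like the following sketch. It assumes the ha_cluster host variables sbd_watchdog_modules and sbd_devices described in that section; the watchdog module name and the shared device path are placeholders for your own hardware.

all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_devices:
          - /dev/disk/by-id/<shared_sbd_device>
    node2:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_devices:
          - /dev/disk/by-id/<shared_sbd_device>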
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. This example playbook file configures a cluster running the firewalld and selinux services that uses SBD fencing and creates the SBD Stonith resource.
When creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
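The following is a minimal sketch of such a playbook. The SBD option values and the shared device path are placeholder assumptions; adjust them for your watchdog and shared storage, and see the role README.md file for the full SBD variable syntax.

- name: Create a high availability cluster that uses SBD node fencing
  hosts: node1,node2
  tasks:
    - name: Configure a cluster with SBD enabled and an SBD Stonith resource
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: <password>   # vault encrypt this value for production
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_sbd_enabled: true
        ha_cluster_sbd_options:
          - name: delay-start
            value: 'no'
          - name: startmode
            value: always
          - name: timeout-action
            value: 'flush,reboot'
          - name: watchdog-timeout
            value: 30
        ha_cluster_resource_primitives:
          - id: fence_sbd
            agent: 'stonith:fence_sbd'
            instance_attrs:
              - attrs:
                  # Shared block device used by SBD (placeholder path)
                  - name: devices
                    value: /dev/disk/by-id/<shared_sbd_device>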
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
11.11. Configuring a high availability cluster using a quorum device
Your cluster can sustain more node failures than standard quorum rules permit when you configure a separate quorum device. The quorum device acts as a lightweight arbitration device for the cluster. Use a quorum device for clusters with an even number of nodes.
With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation.
For information about quorum devices, see Configuring quorum devices.
To configure a high availability cluster with a separate quorum device by using the ha_cluster RHEL system role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters.
This feature is available in RHEL 8.8 and later.
11.11.1. Configuring a quorum device
You can use the ha_cluster RHEL system role to configure a quorum device for high availability clusters. Note that you cannot run a quorum device on a cluster node.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The system that you will use to run the quorum device has active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the quorum devices as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook-qdevice.yml, with the following content:
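The following is a minimal sketch of such a playbook. It assumes a dedicated quorum device host named nodeQ in your inventory, and the qnetd options shown are the simplest possible configuration; see the role README.md file for all supported options.

- name: Configure a quorum device host
  hosts: nodeQ
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Set up a qnetd host with no cluster configured on it
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_present: false
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_qnetd:
          present: true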
The settings specified in the example playbook include the following:
ha_cluster_cluster_present: false
A variable that, if set to false, determines that all cluster configuration will be removed from the target host.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_qnetd: <quorum_device_options>
A variable that configures a qnetd host.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml
11.11.2. Configuring a cluster to use a quorum device
You can use the ha_cluster RHEL system role to configure a cluster with a quorum device.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
- You have configured a quorum device.
Procedure
Create a playbook file, for example, ~/playbook-cluster-qdevice.yml, with the following content:
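The following is a minimal sketch of such a playbook. The quorum device host name nodeQ and the lms algorithm are placeholder assumptions; see the role README.md file for all supported quorum device options.

- name: Configure a cluster that uses a quorum device
  hosts: node1,node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create a cluster that connects to the qnetd host
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_quorum:
          device:
            model: net
            model_options:
              # Host running the qnetd quorum device
              - name: host
                value: nodeQ
              - name: algorithm
                value: lms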
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_quorum: <quorum_parameters>
A variable that configures cluster quorum, which you can use to specify that the cluster uses a quorum device.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml
11.12. Configuring a high availability cluster with node attributes
You can use Pacemaker rules to make your configuration more dynamic. For example, you can use a node attribute to assign machines to different processing groups based on time and then use that attribute when creating location constraints.
Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. For information on node attributes, see Determining resource location with rules.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures node attributes.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
- RHEL 8.10 and later
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
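The following is a minimal sketch of such a playbook. The processing_group attribute name and its values are placeholder assumptions used only to illustrate the ha_cluster_node_options variable; see the role README.md file for the full node options syntax.

- name: Create a high availability cluster that configures node attributes
  hosts: node1,node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure node attributes for each cluster node
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_node_options:
          # Assign each node a different value of the processing_group attribute
          - node_name: node1
            attributes:
              - attrs:
                  - name: processing_group
                    value: group_a
          - node_name: node2
            attributes:
              - attrs:
                  - name: processing_group
                    value: group_b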
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_node_options: <node_settings>
A variable that defines various settings that vary from one cluster node to another.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
11.13. Configuring an Apache HTTP server in a high availability cluster with the ha_cluster RHEL system role
You can use the ha_cluster RHEL system role to configure an Apache HTTP server in a high availability cluster.
High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat High Availability Add-On Documentation Guide.
The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.
This example uses an APC power switch with a host name of zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 8.
- You have configured an LVM logical volume with an XFS file system, as described in Configuring an LVM volume with an XFS file system in a Pacemaker cluster.
- You have configured an Apache HTTP server, as described in Configuring an Apache HTTP Server.
- Your system includes an APC power switch that will be used to fence the cluster nodes.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
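The following is a minimal sketch of such a playbook, based on the resources described in this use case. The volume group my_vg, logical volume my_lv, floating IP address, and APC credentials are placeholder assumptions; replace them with the values from your own environment, and see the role README.md file for the full variable syntax.

- name: Create a high availability cluster that runs an Apache HTTP server
  hosts: z1.example.com,z2.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure an active/passive Apache HTTP server in a two-node cluster
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my_cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_fence_agent_packages:
          # Install only the fence agent this cluster requires
          - fence-agents-apc-snmp
        ha_cluster_resource_primitives:
          - id: myapc
            agent: 'stonith:fence_apc_snmp'
            instance_attrs:
              - attrs:
                  - name: ip
                    value: zapc.example.com
                  - name: pcmk_host_map
                    value: "z1.example.com:1;z2.example.com:2"
                  - name: username
                    value: <apc_user>
                  - name: password
                    value: <apc_password>
          - id: my_lvm
            agent: 'ocf:heartbeat:LVM-activate'
            instance_attrs:
              - attrs:
                  - name: vgname
                    value: my_vg
                  - name: vg_access_mode
                    value: system_id
          - id: my_fs
            agent: 'ocf:heartbeat:Filesystem'
            instance_attrs:
              - attrs:
                  - name: device
                    value: /dev/my_vg/my_lv
                  - name: directory
                    value: /var/www
                  - name: fstype
                    value: xfs
          - id: VirtualIP
            agent: 'ocf:heartbeat:IPaddr2'
            instance_attrs:
              - attrs:
                  - name: ip
                    value: 198.51.100.3
                  - name: cidr_netmask
                    value: 24
          - id: Website
            agent: 'ocf:heartbeat:apache'
            instance_attrs:
              - attrs:
                  - name: configfile
                    value: /etc/httpd/conf/httpd.conf
                  - name: statusurl
                    value: "http://127.0.0.1/server-status"
        ha_cluster_resource_groups:
          # Run the storage, IP, and web server resources together as one group
          - id: apachegroup
            resource_ids:
              - my_lvm
              - my_fs
              - VirtualIP
              - Website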
The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_fence_agent_packages: <fence_agent_packages>
A list of fence agent packages to install.
ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_resource_groups: <resource_groups>
A list of resource group definitions configured by the ha_cluster RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.
Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster.
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
For RHEL 8.6 and later, replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path, where website is the name of the Apache resource. In this example, the Apache resource name is Website.
/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
For RHEL 8.5 and earlier, replace the line you removed with the following three lines.
/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
Verification
From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node, z1.example.com.
If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration.
Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".
Hello
To test whether the resource group running on z1.example.com fails over to node z2.example.com, put node z1.example.com in standby mode, after which the node will no longer be able to host resources.
[root@z1 ~]# pcs node standby z1.example.com
After putting node z1 in standby mode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running on z2.
The website at the defined IP address should still display, without interruption.
To remove z1 from standby mode, enter the following command.
[root@z1 ~]# pcs node unstandby z1.example.com
Note
Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.