Chapter 11. Configuring a high-availability cluster by using the ha_cluster system role
With the ha_cluster system role, you can configure and manage a high-availability cluster that uses the Pacemaker high availability cluster resource manager.
11.1. Specifying an inventory for the ha_cluster RHEL system role
When configuring an HA cluster using the ha_cluster system role playbook, you configure the names and addresses of the nodes for the cluster in an inventory.
For each node in an inventory, you can optionally specify the following items:
- node_name - the name of a node in a cluster.
- pcs_address - an address used by pcs to communicate with the node. It can be a name, FQDN, or an IP address, and it can include a port number.
- corosync_addresses - a list of addresses used by Corosync. All nodes that form a particular cluster must have the same number of addresses, and the order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes.
The following example shows an inventory with targets node1 and node2. node1 and node2 must be either fully qualified domain names, or it must otherwise be possible to connect to the nodes through them, as when, for example, the names are resolvable through the /etc/hosts file.
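An inventory of the following shape fits this description; the node names and IP addresses shown here are illustrative placeholders, and the per-host ha_cluster dictionary uses the options described above:

all:
  hosts:
    node1:
      ha_cluster:
        node_name: node-A
        pcs_address: node1-address
        corosync_addresses:
          - 192.168.1.11
          - 192.168.2.11
    node2:
      ha_cluster:
        node_name: node-B
        pcs_address: node2-address:2224
        corosync_addresses:
          - 192.168.1.12
          - 192.168.2.12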
You can optionally configure watchdog and SBD devices for each node in an inventory. All SBD devices must be shared to and accessible from all nodes. Watchdog devices can differ for each node. For an example procedure that configures SBD node fencing in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
11.2. Creating pcsd TLS certificates and key files for a high availability cluster
The connection between cluster nodes is secured using Transport Layer Security (TLS) encryption. By default, the pcsd daemon generates self-signed certificates. For many deployments, however, you may want to replace the default certificates with certificates issued by a certificate authority of your company and apply your company certificate policies for pcsd.
You can use the ha_cluster RHEL system role to create TLS certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster RHEL system role uses the certificate RHEL system role internally to manage TLS certificates.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
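A sketch of such a playbook follows. The cluster name my-new-cluster and the certificate name FILENAME are illustrative values, and the ha_cluster_pcsd_certificates entry follows the certificate_requests format of the certificate RHEL system role:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # Creates /var/lib/pcsd/FILENAME.crt and /var/lib/pcsd/FILENAME.key
    ha_cluster_pcsd_certificates:
      - name: FILENAME
        common_name: "{{ ansible_hostname }}"
        ca: self-sign
  roles:
    - rhel-system-roles.ha_cluster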
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_pcsd_certificates: <certificate_properties> - A variable that creates a self-signed pcsd certificate and private key files in /var/lib/pcsd. In this example, the pcsd certificate has the file name FILENAME.crt and the key file is named FILENAME.key.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.3. Configuring a high availability cluster running no resources
You can use the ha_cluster system role to configure a basic cluster in a simple, automatic way. Once you have created a basic cluster, you can use the pcs command-line interface to configure the other cluster components and behaviors on a resource-by-resource basis. The following example procedure configures a basic two-node cluster, with no fencing configured, using the minimum required parameters.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
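A minimal sketch of such a playbook follows; the cluster name my-new-cluster is an illustrative value:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
  roles:
    - rhel-system-roles.ha_cluster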
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.4. Configuring a high availability cluster with fencing and resources
The specific components of a cluster configuration depend on your individual needs, which vary between sites. The following example procedure shows the formats for configuring different cluster components by using the ha_cluster RHEL system role. The configured cluster includes a fencing device, cluster resources, resource groups, and a cloned resource.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
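A sketch of such a playbook follows. The fence_xvm fencing agent, the ocf:pacemaker:Dummy resources, and all resource IDs are illustrative; substitute the agents and attributes that match your environment:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_resource_primitives:
      # A fencing device plus plain resources; dummy-1 and dummy-2 are grouped,
      # and simple-clone is cloned below.
      - id: xvm-fencing
        agent: 'stonith:fence_xvm'
        instance_attrs:
          - attrs:
              - name: pcmk_host_list
                value: node1 node2
      - id: dummy-1
        agent: 'ocf:pacemaker:Dummy'
      - id: dummy-2
        agent: 'ocf:pacemaker:Dummy'
      - id: simple-clone
        agent: 'ocf:pacemaker:Dummy'
    ha_cluster_resource_groups:
      - id: dummy-group
        resource_ids:
          - dummy-1
          - dummy-2
    ha_cluster_resource_clones:
      - resource_id: simple-clone
  roles:
    - rhel-system-roles.ha_cluster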
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
- ha_cluster_resource_groups: <resource_groups> - A list of resource group definitions configured by the ha_cluster RHEL system role.
- ha_cluster_resource_clones: <resource_clones> - A list of resource clone definitions configured by the ha_cluster RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.5. Configuring a high availability cluster with resource and resource operation defaults
In your cluster configuration, you can change the Pacemaker default values of a resource option for all resources. You can also change the default value for all resource operations in the cluster.
For information about changing the default value of a resource option, see Changing the default value of a resource option. For information about global resource operation defaults, see Configuring global resource operation defaults.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines resource and resource operation defaults.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
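A sketch of such a playbook follows; the resource-stickiness and timeout values are illustrative defaults, and each defaults variable takes one or more sets of meta attributes:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # Defaults applied to all resources
    ha_cluster_resource_defaults:
      meta_attrs:
        - attrs:
            - name: resource-stickiness
              value: 100
    # Defaults applied to all resource operations
    ha_cluster_resource_operation_defaults:
      meta_attrs:
        - attrs:
            - name: timeout
              value: 60s
  roles:
    - rhel-system-roles.ha_cluster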
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_resource_defaults: <resource_defaults> - A variable that defines sets of resource defaults.
- ha_cluster_resource_operation_defaults: <resource_operation_defaults> - A variable that defines sets of resource operation defaults.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.6. Configuring a high availability cluster with fencing levels
When you configure multiple fencing devices for a node, you need to define fencing levels for those devices to determine the order in which Pacemaker will use the devices to attempt to fence a node. For information about fencing levels, see Configuring fencing levels.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines fencing levels.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
  fence1_password: <fence1_password>
  fence2_password: <fence2_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml. This example playbook file configures a cluster running the firewalld and selinux services.
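A sketch of such a playbook follows. The fence_apc_snmp agents, device host names, and credentials are illustrative; the vaulted fence1_password and fence2_password variables supply the device passwords. Level 1 tries the apc1 device first, and level 2 falls back to apc2 if level 1 fails:

---
- name: Create a high availability cluster with fencing levels
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_resource_primitives:
      - id: apc1
        agent: 'stonith:fence_apc_snmp'
        instance_attrs:
          - attrs:
              - name: ip
                value: apc1.example.com
              - name: username
                value: user
              - name: password
                value: "{{ fence1_password }}"
              - name: pcmk_host_map
                value: node1:1;node2:2
      - id: apc2
        agent: 'stonith:fence_apc_snmp'
        instance_attrs:
          - attrs:
              - name: ip
                value: apc2.example.com
              - name: username
                value: user
              - name: password
                value: "{{ fence2_password }}"
              - name: pcmk_host_map
                value: node1:1;node2:2
    # Fencing topology: try apc1 first, then apc2
    ha_cluster_stonith_levels:
      - level: 1
        target: node1
        resource_ids:
          - apc1
      - level: 2
        target: node1
        resource_ids:
          - apc2
  roles:
    - rhel-system-roles.ha_cluster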
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
- ha_cluster_stonith_levels: <stonith_levels> - A variable that defines STONITH levels, also known as fencing topology, which configure a cluster to use multiple devices to fence nodes.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.7. Configuring a high availability cluster with resource constraints using system roles
When configuring a cluster, you can specify the behavior of the cluster resources to be in line with your application requirements. You can control the behavior of cluster resources by configuring resource constraints.
You can define the following categories of resource constraints:
- Location constraints, which determine which nodes a resource can run on. For information about location constraints, see Determining which nodes a resource can run on.
- Ordering constraints, which determine the order in which the resources are run. For information about ordering constraints, see Determining the order in which cluster resources are run.
- Colocation constraints, which specify that the location of one resource depends on the location of another resource. For information about colocation constraints, see Colocating cluster resources.
- Ticket constraints, which indicate the resources that depend on a particular Booth ticket. For information about Booth ticket constraints, see Multi-site Pacemaker clusters.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
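A sketch of such a playbook follows. The fence_xvm device, the Dummy resources, the constraint scores, and the ticket name ticket1 are illustrative values:

---
- name: Create a high availability cluster with resource constraints
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_resource_primitives:
      - id: xvm-fencing
        agent: 'stonith:fence_xvm'
        instance_attrs:
          - attrs:
              - name: pcmk_host_list
                value: node1 node2
      - id: dummy-1
        agent: 'ocf:pacemaker:Dummy'
      - id: dummy-2
        agent: 'ocf:pacemaker:Dummy'
    # dummy-1 prefers node1; dummy-2 runs with dummy-1 and starts after it
    ha_cluster_constraints_location:
      - resource:
          id: dummy-1
        node: node1
        options:
          - name: score
            value: 20
    ha_cluster_constraints_colocation:
      - resource_leader:
          id: dummy-1
        resource_follower:
          id: dummy-2
        options:
          - name: score
            value: 10
    ha_cluster_constraints_order:
      - resource_first:
          id: dummy-1
        resource_then:
          id: dummy-2
    ha_cluster_constraints_ticket:
      - resource:
          id: dummy-1
        ticket: ticket1
        options:
          - name: loss-policy
            value: stop
  roles:
    - rhel-system-roles.ha_cluster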
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
- ha_cluster_constraints_location: <location_constraints> - A variable that defines resource location constraints.
- ha_cluster_constraints_colocation: <colocation_constraints> - A variable that defines resource colocation constraints.
- ha_cluster_constraints_order: <order_constraints> - A variable that defines resource order constraints.
- ha_cluster_constraints_ticket: <ticket_constraints> - A variable that defines Booth ticket constraints.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.8. Configuring Corosync values in a high availability cluster using RHEL system roles
The corosync.conf file provides the cluster parameters used by Corosync, the cluster membership and messaging layer that Pacemaker is built on. For your system configuration, you can change some of the default parameters in the corosync.conf file. In general, you should not edit the corosync.conf file directly. You can, however, configure Corosync values by using the ha_cluster RHEL system role.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures Corosync values.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
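A sketch of such a playbook follows; the specific knet transport, totem, and quorum options shown are illustrative choices, not requirements:

---
- name: Create a high availability cluster with Corosync values
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_transport:
      type: knet
      options:
        - name: ip_version
          value: ipv4-6
        - name: link_mode
          value: active
      links:
        - - name: linknumber
            value: 1
          - name: link_priority
            value: 5
      compression:
        - name: level
          value: 5
        - name: model
          value: zlib
      crypto:
        - name: cipher
          value: none
        - name: hash
          value: none
    ha_cluster_totem:
      options:
        - name: block_unlisted_ips
          value: 'yes'
        - name: send_join
          value: 0
    ha_cluster_quorum:
      options:
        - name: auto_tie_breaker
          value: 1
        - name: wait_for_all
          value: 1
  roles:
    - rhel-system-roles.ha_cluster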
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_transport: <transport_method> - A variable that sets the cluster transport method.
- ha_cluster_totem: <totem_options> - A variable that configures Corosync totem options.
- ha_cluster_quorum: <quorum_options> - A variable that configures cluster quorum options.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.9. Exporting a cluster configuration to create a RHEL system role playbook
You can use the ha_cluster RHEL system role to export the Corosync configuration of a cluster into ha_cluster variables that can be fed back to the role to recreate the same cluster. If you did not use ha_cluster to create your cluster, or if you do not have access to the original playbook for the cluster, you can use this feature to build a new playbook for creating the cluster.
When you export a cluster's configuration by using the ha_cluster RHEL system role, not all of the variables are exported. You must manually modify the configuration to account for these variables.
The following variables are present in the export:
- ha_cluster_cluster_present
- ha_cluster_start_on_boot
- ha_cluster_cluster_name
- ha_cluster_transport
- ha_cluster_totem
- ha_cluster_quorum
- ha_cluster_node_options - only the node_name, corosync_addresses, and pcs_address options are present.
The following variables are not present in the export:
- ha_cluster_hacluster_password - This is a mandatory variable for the role, but it cannot be extracted from existing clusters.
- ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, and ha_cluster_fence_virt_key_src - These variables should contain paths to files with Corosync and Pacemaker keys. Since the keys themselves are not exported, these variables are not present in the export either. These keys should be unique for each cluster.
- ha_cluster_regenerate_keys - You should decide whether to use existing keys or to generate new ones.
To export the current cluster configuration, run the ha_cluster RHEL system role and set ha_cluster_export_configuration: true. This triggers the export once the role finishes configuring a cluster or a qnetd host, and stores it in the ha_cluster_facts variable.
By default, ha_cluster_cluster_present is set to true and ha_cluster_qnetd.present is set to false. These settings will reconfigure your cluster on the specified hosts, remove the qnetd configuration from the specified hosts, and then export the configuration. To trigger the export without modifying an existing configuration, run the role with the following settings:
The following procedure:
- Exports the cluster configuration from cluster node node1 into the ha_cluster_facts variable.
- Sets the ha_cluster_cluster_present and ha_cluster_qnetd variables to null to ensure that running this playbook does not modify the existing cluster configuration.
- Uses the Ansible debug module to display the content of ha_cluster_facts.
- Saves the contents of ha_cluster_facts to a file on the control node in a YAML format for you to write a playbook around it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have previously configured the high availability cluster with the configuration to export.
- You have created an inventory file on the control node, as described in Preparing a control node on RHEL 10.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
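A sketch of such a playbook follows; /path/to/file is a placeholder for the destination path on the control node:

---
- name: Export a cluster configuration
  hosts: node1
  tasks:
    - name: Run the role without modifying the cluster, exporting its configuration
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_present: null
        ha_cluster_qnetd: null
        ha_cluster_export_configuration: true

    - name: Display the exported configuration
      ansible.builtin.debug:
        var: ha_cluster_facts

    - name: Save the exported configuration to a file on the control node
      ansible.builtin.copy:
        content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }}"
        dest: /path/to/file
        mode: "0640"
      delegate_to: localhost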
The settings specified in the example playbook include the following:
- hosts: node1 - A node containing the cluster information to export.
- ha_cluster_cluster_present: null - A setting to indicate that the cluster configuration will not be changed on the specified host.
- ha_cluster_qnetd: null - A setting to indicate that the qnetd host configuration will not be changed on the specified host.
- ha_cluster_export_configuration: true - A variable that determines whether to export the current cluster configuration and store it in the ha_cluster_facts variable, which is generated by the ha_cluster_info module.
- ha_cluster_facts - A variable that contains the exported cluster configuration.
- delegate_to: localhost - Specifies the control node as the location for the exported configuration file.
- content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }}", dest: /path/to/file, mode: "0640" - Copies the configuration file in a YAML format to /path/to/file, setting the file permissions to 0640.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Write a playbook for your system using the variables you exported to /path/to/file on the control node.
You must add the ha_cluster_hacluster_password variable, as it is a required variable but is not present in the export. Optionally, add the ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, ha_cluster_fence_virt_key_src, and ha_cluster_regenerate_keys variables if your system requires them. These variables are never exported.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
11.10. Configuring a high availability cluster that implements access control lists (ACLs) by using the ha_cluster RHEL system role
The pcs administration account for a cluster is hacluster. Using access control lists (ACLs), you can grant permission for specific local users other than user hacluster to manage a Pacemaker cluster. A common use case for this feature is to restrict unauthorized users from accessing business-sensitive information.
By default, ACLs are not enabled. Consequently, any member of the group haclient on all nodes has full local read and write access to the cluster configuration. Users who are not members of haclient have no access. When ACLs are enabled, however, even users who are members of the haclient group have access only to what has been granted to that user by the ACLs. The root and hacluster user accounts always have full access to the cluster configuration, even when ACLs are enabled.
When you set permissions for local users with ACLs, you create a role that defines the permissions, and you then assign that role to a user. If you assign multiple roles to the same user, any deny permission takes precedence, then write, then read.
The following example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that implements ACLs to control access to the cluster configuration.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
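A sketch of such a playbook follows. The role ID operator, the user alice, the xpath permission, and the fence_xvm device are illustrative values. Note that ACLs only take effect when the enable-acl cluster property is set to true:

---
- name: Create a high availability cluster with ACLs
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_resource_primitives:
      - id: xvm-fencing
        agent: 'stonith:fence_xvm'
        instance_attrs:
          - attrs:
              - name: pcmk_host_list
                value: node1 node2
    # ACLs are ignored unless enable-acl is set to true
    ha_cluster_cluster_properties:
      - attrs:
          - name: enable-acl
            value: 'true'
    ha_cluster_acls:
      acl_roles:
        - id: operator
          description: HA cluster operator
          permissions:
            - kind: write
              xpath: //crm_config//nvpair[@name='maintenance-mode']
      acl_users:
        - id: alice
          roles:
            - operator
  roles:
    - rhel-system-roles.ha_cluster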
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
- ha_cluster_cluster_properties: <cluster_properties> - A list of sets of cluster properties for Pacemaker cluster-wide configuration.
- ha_cluster_acls: <dictionary> - A dictionary of ACL role, user, and group values.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.11. Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable
You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using a STONITH Block Device (SBD). This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.
You can use the ha_cluster RHEL system role to configure SBD fencing in an automated fashion. With ha_cluster, you can configure watchdog and SBD devices on a per-node basis by using one of two variables:
- ha_cluster_node_options - A single variable that you define in a playbook file. It is a list of dictionaries, where each dictionary defines options for one node.
- ha_cluster - A dictionary that defines options for one node only. You configure the ha_cluster variable in an inventory file. To set different values for each node, you define the variable separately for each node.
If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
This example procedure uses the ha_cluster_node_options variable in a playbook file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster variable in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
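A sketch of such a playbook follows. The watchdog module iTCO_wdt, the device paths, the SBD option values, and the fence_sbd resource are illustrative values to replace with the hardware and timings that apply to your nodes:

---
- name: Create a high availability cluster with SBD fencing
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_sbd_enabled: true
    ha_cluster_sbd_options:
      - name: delay-start
        value: 'no'
      - name: startmode
        value: always
      - name: timeout-action
        value: 'flush,reboot'
      - name: watchdog-timeout
        value: 30
    # Per-node watchdog and SBD device settings
    ha_cluster_node_options:
      - node_name: node1
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001
      - node_name: node2
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001
    ha_cluster_cluster_properties:
      - attrs:
          - name: stonith-watchdog-timeout
            value: 0
    ha_cluster_resource_primitives:
      - id: fence_sbd
        agent: 'stonith:fence_sbd'
        instance_attrs:
          - attrs:
              - name: devices
                value: /dev/disk/by-id/000001
  roles:
    - rhel-system-roles.ha_cluster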
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_sbd_enabled: true - A variable that determines whether the cluster can use the SBD node fencing mechanism.
- ha_cluster_sbd_options: <sbd_options> - A list of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page on your system.
- ha_cluster_node_options: <node_options> - A variable that defines settings which vary from one cluster node to another. You can configure the following SBD and watchdog items:
  - sbd_watchdog_modules - Watchdog kernel modules to be loaded, which create /dev/watchdog* devices.
  - sbd_watchdog_modules_blocklist - Watchdog kernel modules to be unloaded and blocked.
  - sbd_watchdog - Watchdog device to be used by SBD.
  - sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).
- ha_cluster_cluster_properties: <cluster_properties> - A list of sets of cluster properties for Pacemaker cluster-wide configuration.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.12. Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable
You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using a STONITH Block Device (SBD). This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.
You can use the ha_cluster RHEL system role to configure SBD fencing in an automated fashion. With ha_cluster, you can configure watchdog and SBD devices on a per-node basis by using one of two variables:
- ha_cluster_node_options - A single variable that you define in a playbook file. It is a list of dictionaries, where each dictionary defines options for one node.
- ha_cluster - A dictionary that defines options for one node only. You configure the ha_cluster variable in an inventory file. To set different values for each node, you define the variable separately for each node.
If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
The following example procedure uses the ha_cluster system role to create a high availability cluster with SBD fencing. This example procedure uses the ha_cluster variable in an inventory file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster_node_options variable in a playbook file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Create an inventory file for your cluster that configures watchdog and SBD devices for each node by using the ha_cluster variable, as in the following example:
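A representative inventory of this form follows; the iTCO_wdt watchdog module and the /dev/disk/by-id/000001 device path are illustrative placeholders:

all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001
    node2:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001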
The SBD and watchdog settings specified in the example inventory include the following:
- sbd_watchdog_modules - Watchdog kernel modules to be loaded, which create /dev/watchdog* devices.
- sbd_watchdog_modules_blocklist - Watchdog kernel modules to be unloaded and blocked.
- sbd_watchdog - Watchdog device to be used by SBD.
- sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).
For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, as in the following example. Since you have specified the SBD and watchdog variables in an inventory, you do not need to include them in the playbook.
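A sketch of such a playbook follows, with illustrative SBD options and an illustrative fence_sbd resource pointing at the shared device configured in the inventory:

---
- name: Create a high availability cluster with SBD fencing
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_sbd_enabled: true
    ha_cluster_sbd_options:
      - name: delay-start
        value: 'no'
      - name: startmode
        value: always
      - name: timeout-action
        value: 'flush,reboot'
      - name: watchdog-timeout
        value: 30
    ha_cluster_cluster_properties:
      - attrs:
          - name: stonith-watchdog-timeout
            value: 0
    ha_cluster_resource_primitives:
      - id: fence_sbd
        agent: 'stonith:fence_sbd'
        instance_attrs:
          - attrs:
              - name: devices
                value: /dev/disk/by-id/000001
  roles:
    - rhel-system-roles.ha_cluster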
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_sbd_enabled: true - A variable that determines whether the cluster can use the SBD node fencing mechanism.
- ha_cluster_sbd_options: <sbd_options> - A list of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page on your system.
- ha_cluster_cluster_properties: <cluster_properties> - A list of sets of cluster properties for Pacemaker cluster-wide configuration.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.13. Configuring a placement strategy for a high availability cluster by using the ha_cluster RHEL system role
A Pacemaker cluster allocates resources according to a resource allocation score. By default, if the resource allocation scores on all the nodes are equal, Pacemaker allocates the resource to the node with the smallest number of allocated resources. If the resources in your cluster use significantly different proportions of a node’s capacities, such as memory or I/O, the default behavior may not be the best strategy for balancing your system’s workload. In this case, you can customize an allocation strategy by configuring utilization attributes and placement strategies for nodes and resources.
For detailed information about configuring utilization attributes and placement strategies, see Configuring a node placement strategy.
This example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that configures utilization attributes to define a placement strategy.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
- Create the vault:
  $ ansible-vault create ~/vault.yml
  New Vault password: <vault_password>
  Confirm New Vault password: <vault_password>
- After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
  cluster_password: <cluster_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
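A sketch of such a playbook follows. The utilization attribute names (cpu, memory) and their values are illustrative; Pacemaker treats them as opaque numeric capacities, so any consistent naming works:

---
- name: Create a high availability cluster with a placement strategy
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # placement-strategy must differ from "default" for utilization to take effect
    ha_cluster_cluster_properties:
      - attrs:
          - name: placement-strategy
            value: utilization
    # Node capacities, expressed as utilization attributes
    ha_cluster_node_options:
      - node_name: node1
        utilization:
          - attrs:
              - name: cpu
                value: 2
              - name: memory
                value: 2048
      - node_name: node2
        utilization:
          - attrs:
              - name: cpu
                value: 4
              - name: memory
                value: 4096
    # A resource declaring how much capacity it consumes
    ha_cluster_resource_primitives:
      - id: resource1
        agent: 'ocf:pacemaker:Dummy'
        utilization:
          - attrs:
              - name: cpu
                value: 1
              - name: memory
                value: 1024
  roles:
    - rhel-system-roles.ha_cluster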
The settings specified in the example playbook include the following:
- ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
- ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
- ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
- ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
- ha_cluster_cluster_properties: <cluster_properties> - A list of sets of cluster properties for Pacemaker cluster-wide configuration. For utilization to have an effect, the placement-strategy property must be set and its value must be different from the value default.
- ha_cluster_node_options: <node_options> - A variable that defines various settings which vary from one cluster node to another.
- ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.14. Configuring alerts for a high availability cluster by using the ha_cluster RHEL system role
When a Pacemaker event occurs, such as a resource or a node failure or a configuration change, you may want to take some external action. For example, you may want to send an email message or log to a file or update a monitoring system.
You can configure your system to take an external action by using alert agents. These are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. The cluster passes information about the event to the agent through environment variables.
The ha_cluster RHEL system role configures the cluster to call external programs to handle alerts. However, you must provide these programs and distribute them to the cluster nodes.
For more detailed information about alert agents, see Triggering scripts for cluster events.
This example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that configures a Pacemaker alert.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
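The playbook below is a minimal sketch: the alert agent path, the attribute names and values, and the recipient value are illustrative assumptions, and you must provide the agent executable yourself on every node. The vault file is the one created in the previous step.

---
- name: Create a high availability cluster with a Pacemaker alert
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_alerts:
      - id: alert1
        # You must provide this executable and distribute it to all cluster nodes
        path: /alert1/path
        description: Alert1 description
        instance_attrs:
          - attrs:
              - name: alert_attr1_name
                value: alert_attr1_value
        meta_attrs:
          - attrs:
              - name: alert_meta_attr1_name
                value: alert_meta_attr1_value
        recipients:
          - value: recipient_value
            id: recipient1
            description: Recipient1 description
            instance_attrs:
              - attrs:
                  - name: recipient_attr1_name
                    value: recipient_attr1_value
            meta_attrs:
              - attrs:
                  - name: recipient_meta_attr1_name
                    value: recipient_meta_attr1_value
  roles:
    - rhel-system-roles.ha_cluster

The settings specified in the example playbook include the following: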
ha_cluster_cluster_name: <cluster_name>
- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
- The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
- A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
- A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_alerts: <alert_definitions>
- A variable that defines Pacemaker alerts. Each alert can specify the following items:
  - id - ID of an alert.
  - path - Path to the alert agent executable.
  - description - Description of the alert.
  - instance_attrs - List of sets of the alert's instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
  - meta_attrs - List of sets of the alert's meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
  - recipients - List of the alert's recipients. Each recipient can specify the following items:
    - value - Value of a recipient.
    - id - ID of the recipient.
    - description - Description of the recipient.
    - instance_attrs - List of sets of the recipient's instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
    - meta_attrs - List of sets of the recipient's meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.15. Configuring a high availability cluster with a quorum device using RHEL system roles
Your cluster can sustain more node failures than standard quorum rules permit when you configure a separate quorum device. The quorum device acts as a lightweight arbitration device for the cluster. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation.
For information about quorum devices, see Configuring quorum devices.
To configure a high availability cluster with a separate quorum device by using the ha_cluster RHEL system role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters.
11.15.1. Configuring a quorum device
To configure a quorum device using the ha_cluster RHEL system role, follow the steps in this example procedure. Note that you cannot run a quorum device on a cluster node.

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The system that you will use to run the quorum device has active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook-qdevice.yml, with the following content:
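The playbook below is a minimal sketch; the host name nodeQ is an illustrative assumption for the system that will run the quorum device, and the vault file is the one created in the previous step.

---
- name: Configure a qnetd quorum device host
  hosts: nodeQ
  vars_files:
    - ~/vault.yml
  vars:
    # No cluster runs on this host; only the quorum device is set up
    ha_cluster_cluster_present: false
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_qnetd:
      present: true
  roles:
    - rhel-system-roles.ha_cluster

The settings specified in the example playbook include the following: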
ha_cluster_cluster_present: false
- A variable that, if set to false, determines that all cluster configuration will be removed from the target host.
ha_cluster_hacluster_password: <password>
- The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
- A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
- A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_qnetd: <quorum_device_options>
- A variable that configures a qnetd host.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml
11.15.2. Configuring a cluster to use a quorum device
To configure a cluster to use a quorum device, follow the steps in this example procedure.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Create a playbook file, for example, ~/playbook-cluster-qdevice.yml, with the following content:
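The playbook below is a minimal sketch: the quorum device host name nodeQ and the lms algorithm are illustrative assumptions, and the vault file is the one created in the previous procedure.

---
- name: Create a high availability cluster that uses a quorum device
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # Point the cluster at the qnetd host configured earlier
    ha_cluster_quorum:
      device:
        model: net
        model_options:
          - name: host
            value: nodeQ
          - name: algorithm
            value: lms
  roles:
    - rhel-system-roles.ha_cluster

The settings specified in the example playbook include the following: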
ha_cluster_cluster_name: <cluster_name>
- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
- The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
- A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
- A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_quorum: <quorum_parameters>
- A variable that configures cluster quorum, which you can use to specify that the cluster uses a quorum device.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml
11.16. Configuring a high availability cluster with node attributes using RHEL system roles
You can use Pacemaker rules to make your configuration more dynamic. For example, you can use a node attribute to assign machines to different processing groups based on time and then use that attribute when creating location constraints.
Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. For information on node attributes, see Determining resource location with rules.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures node attributes.

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
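The playbook below is a minimal sketch; the attribute name and the per-node values are illustrative assumptions, and the vault file is the one created in the previous step.

---
- name: Create a high availability cluster with node attributes
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # Per-node attributes for use in Pacemaker rules (illustrative)
    ha_cluster_node_options:
      - node_name: node1
        attributes:
          - attrs:
              - name: attribute1
                value: value1A
      - node_name: node2
        attributes:
          - attrs:
              - name: attribute1
                value: value1B
  roles:
    - rhel-system-roles.ha_cluster

The settings specified in the example playbook include the following:

ha_cluster_cluster_name: <cluster_name>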
- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
- The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
- A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
- A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_node_options: <node_settings>
- A variable that defines various settings that vary from one cluster node to another.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.17. Configuring an Apache HTTP server in a high availability cluster with the ha_cluster RHEL system role
High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat Knowledgebase article Red Hat High Availability Add-On Documentation Guide.
The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.

This example uses an APC power switch with a host name of zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example.

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
- You have configured an LVM logical volume with an XFS file system, as described in Configuring an LVM volume with an XFS file system in a Pacemaker cluster.
- You have configured an Apache HTTP server, as described in Configuring an Apache HTTP Server.
- Your system includes an APC power switch that will be used to fence the cluster nodes.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
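The playbook below is a sketch. The APC credentials, the volume group and logical volume paths, and the floating IP address are illustrative assumptions that you must adapt to your environment; the node names, the fence switch host name, and the resource name Website match the rest of this example.

---
- name: Configure an active/passive Apache HTTP server in a high availability cluster
  hosts: z1.example.com z2.example.com
  vars_files:
    - ~/vault.yml
  vars:
    ha_cluster_cluster_name: my_cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # Only the fence agent this cluster requires (APC power switch)
    ha_cluster_fence_agent_packages:
      - fence-agents-apc-snmp
    ha_cluster_resource_primitives:
      - id: myapc
        agent: 'stonith:fence_apc_snmp'
        instance_attrs:
          - attrs:
              - name: ipaddr
                value: zapc.example.com
              - name: pcmk_host_map
                value: 'z1.example.com:1;z2.example.com:2'
              - name: login
                value: apc        # placeholder credentials
              - name: passwd
                value: apc        # placeholder credentials
      - id: my_lvm
        agent: 'ocf:heartbeat:LVM-activate'
        instance_attrs:
          - attrs:
              - name: vgname
                value: my_vg     # volume group created beforehand
              - name: vg_access_mode
                value: system_id
      - id: my_fs
        agent: 'ocf:heartbeat:Filesystem'
        instance_attrs:
          - attrs:
              - name: device
                value: /dev/my_vg/my_lv
              - name: directory
                value: /var/www
              - name: fstype
                value: xfs
      - id: VirtualIP
        agent: 'ocf:heartbeat:IPaddr2'
        instance_attrs:
          - attrs:
              - name: ip
                value: 198.51.100.3   # floating IP, placeholder
              - name: cidr_netmask
                value: 24
      - id: Website
        agent: 'ocf:heartbeat:apache'
        instance_attrs:
          - attrs:
              - name: configfile
                value: /etc/httpd/conf/httpd.conf
              - name: statusurl
                value: 'http://127.0.0.1/server-status'
    # The order of resources in the group also defines their start order
    ha_cluster_resource_groups:
      - id: apachegroup
        resource_ids:
          - my_lvm
          - my_fs
          - VirtualIP
          - Website
  roles:
    - rhel-system-roles.ha_cluster

The settings specified in the example playbook include the following: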
ha_cluster_cluster_name: <cluster_name>
- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>
- The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true
- A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true
- A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_fence_agent_packages: <fence_agent_packages>
- A list of fence agent packages to install.
ha_cluster_resource_primitives: <cluster_resources>
- A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_resource_groups: <resource_groups>
- A list of resource group definitions configured by the ha_cluster RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml

When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.

Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster:

/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
Replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path, where website is the name of the Apache resource. In this example, the Apache resource name is Website:

/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
Verification
From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node, z1.example.com.

If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration.
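For example, running pcs status as root on one of the nodes displays the cluster, node, and resource status; the output is not reproduced here, but it lists each resource and the node it runs on:

[root@z1 ~]# pcs status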
Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello":

Hello

To test whether the resource group running on
z1.example.com fails over to node z2.example.com, put node z1.example.com in standby mode, after which the node will no longer be able to host resources:

[root@z1 ~]# pcs node standby z1.example.com

After putting node
z1 in standby mode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running on z2.

The web site at the defined IP address should still display, without interruption.
To remove z1 from standby mode, enter the following command:

[root@z1 ~]# pcs node unstandby z1.example.com

Note: Removing a node from
standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.