Chapter 11. Configuring a high-availability cluster by using the ha_cluster system role
With the ha_cluster system role, you can configure and manage a high availability cluster that uses the Pacemaker high availability cluster resource manager.
11.1. Specifying an inventory for the ha_cluster RHEL system role
When configuring an HA cluster using the ha_cluster system role playbook, you configure the names and addresses of the nodes for the cluster in an inventory.
For each node in an inventory, you can optionally specify the following items:
- node_name - the name of a node in a cluster.
- pcs_address - an address used by pcs to communicate with the node. It can be a name, an FQDN, or an IP address, and it can include a port number.
- corosync_addresses - a list of addresses used by Corosync. All nodes that form a particular cluster must have the same number of addresses. The order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes.
The following example shows an inventory with targets node1 and node2. node1 and node2 must be either fully qualified domain names or names that otherwise resolve to the nodes, for example because the names are resolvable through the /etc/hosts file.
all:
hosts:
node1:
ha_cluster:
node_name: node-A
pcs_address: node1-address
corosync_addresses:
- 192.168.1.11
- 192.168.2.11
node2:
ha_cluster:
node_name: node-B
pcs_address: node2-address:2224
corosync_addresses:
- 192.168.1.12
- 192.168.2.12
You can optionally configure watchdog and SBD devices for each node in an inventory. All SBD devices must be shared with and accessible from all nodes. Watchdog devices can differ from node to node. For an example procedure that configures SBD node fencing in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
11.2. Creating pcsd TLS certificates and key files for a high availability cluster
You can use the ha_cluster RHEL system role to create Transport Layer Security (TLS) certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster RHEL system role uses the certificate RHEL system role internally to manage TLS certificates.
The connection between cluster nodes is secured using TLS encryption. By default, the pcsd daemon generates self-signed certificates. For many deployments, however, you may want to replace the default certificates with certificates issued by your company's certificate authority and apply your company's certificate policies for pcsd.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create TLS certificates and key files in a high availability cluster
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_pcsd_certificates:
          - name: FILENAME
            common_name: "{{ ansible_hostname }}"
            ca: self-sign

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_pcsd_certificates: <certificate_properties>
A variable that creates a self-signed pcsd certificate and private key files in /var/lib/pcsd. In this example, the pcsd certificate has the file name FILENAME.crt and the key file is named FILENAME.key.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
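After the playbook finishes, you can optionally inspect the certificate that the role created on a cluster node. The following command assumes the certificate file name FILENAME from the example playbook:

$ openssl x509 -in /var/lib/pcsd/FILENAME.crt -noout -subject -enddate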
11.3. Configuring a high availability cluster running no resources
You can use the ha_cluster system role to configure a basic cluster in a simple, automatic way. Once you have created a basic cluster, you can use the pcs command-line interface to configure the other cluster components and behaviors on a resource-by-resource basis.
This example configures a basic two-node cluster with no fencing, using the minimum required parameters.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create cluster with minimum required parameters and no fencing
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
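After the playbook finishes, you can optionally confirm that the cluster is running from any cluster node:

$ pcs status

The output shows the cluster name and the member nodes; because this example configures no fencing, pcs typically also warns that no STONITH devices are configured.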
11.4. Configuring a high availability cluster with fencing and resources
The specific components of a cluster configuration depend on your individual needs, which vary between sites. You can use the ha_cluster RHEL system role to configure a cluster with a fencing device, cluster resources, resource groups, and a cloned resource.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create cluster with fencing and resources
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_resource_primitives:
          - id: xvm-fencing
            agent: 'stonith:fence_xvm'
            instance_attrs:
              - attrs:
                  - name: pcmk_host_list
                    value: node1 node2
          - id: simple-resource
            agent: 'ocf:pacemaker:Dummy'
          - id: resource-with-options
            agent: 'ocf:pacemaker:Dummy'
            instance_attrs:
              - attrs:
                  - name: fake
                    value: fake-value
                  - name: passwd
                    value: passwd-value
            meta_attrs:
              - attrs:
                  - name: target-role
                    value: Started
                  - name: is-managed
                    value: 'true'
            operations:
              - action: start
                attrs:
                  - name: timeout
                    value: '30s'
              - action: monitor
                attrs:
                  - name: timeout
                    value: '5'
                  - name: interval
                    value: '1min'
          - id: dummy-1
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-2
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-3
            agent: 'ocf:pacemaker:Dummy'
          - id: simple-clone
            agent: 'ocf:pacemaker:Dummy'
          - id: clone-with-options
            agent: 'ocf:pacemaker:Dummy'
        ha_cluster_resource_groups:
          - id: simple-group
            resource_ids:
              - dummy-1
              - dummy-2
            meta_attrs:
              - attrs:
                  - name: target-role
                    value: Started
                  - name: is-managed
                    value: 'true'
          - id: cloned-group
            resource_ids:
              - dummy-3
        ha_cluster_resource_clones:
          - resource_id: simple-clone
          - resource_id: clone-with-options
            promotable: yes
            id: custom-clone-id
            meta_attrs:
              - attrs:
                  - name: clone-max
                    value: '2'
                  - name: clone-node-max
                    value: '1'
          - resource_id: cloned-group
            promotable: yes

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

ha_cluster_resource_groups: <resource_groups>
A list of resource group definitions configured by the ha_cluster RHEL system role.

ha_cluster_resource_clones: <resource_clones>
A list of resource clone definitions configured by the ha_cluster RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
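After the playbook finishes, you can optionally review the resources and the fence device that the role created from any cluster node:

$ pcs resource config
$ pcs stonith config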
11.5. Configuring a high availability cluster with resource and resource operation defaults
In your cluster configuration, you can change the Pacemaker default values of a resource option for all resources. You can also change the default value for all resource operations in the cluster.
For information about changing the default value of a resource option, see Changing the default value of a resource option. For information about global resource operation defaults, see Configuring global resource operation defaults.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines resource and resource operation defaults.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create cluster with fencing and resource operation defaults
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        # Set a different resource-stickiness value during
        # and outside work hours. This allows resources to
        # automatically move back to their most
        # preferred hosts, but at a time that
        # does not interfere with business activities.
        ha_cluster_resource_defaults:
          meta_attrs:
            - id: core-hours
              rule: date-spec hours=9-16 weekdays=1-5
              score: 2
              attrs:
                - name: resource-stickiness
                  value: INFINITY
            - id: after-hours
              score: 1
              attrs:
                - name: resource-stickiness
                  value: 0
        # Default the timeout on all 10-second-interval
        # monitor actions on IPaddr2 resources to 8 seconds.
        ha_cluster_resource_operation_defaults:
          meta_attrs:
            - rule: resource ::IPaddr2 and op monitor interval=10s
              score: INFINITY
              attrs:
                - name: timeout
                  value: 8s

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_resource_defaults: <resource_defaults>
A variable that defines sets of resource defaults.

ha_cluster_resource_operation_defaults: <resource_operation_defaults>
A variable that defines sets of resource operation defaults.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
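After the playbook finishes, you can optionally display the configured resource and resource operation defaults from any cluster node:

$ pcs resource defaults
$ pcs resource op defaults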
11.6. Configuring a high availability cluster with fencing levels
You can use the ha_cluster RHEL system role to configure high availability clusters with fencing levels. When a node has multiple fencing devices, you need to define fencing levels for those devices to determine the order in which Pacemaker attempts to use the devices to fence the node.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>
fence1_password: <fence1_password>
fence2_password: <fence2_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml. This example playbook file configures a cluster running the firewalld and selinux services.

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster that defines fencing levels
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_resource_primitives:
          - id: apc1
            agent: 'stonith:fence_apc_snmp'
            instance_attrs:
              - attrs:
                  - name: ip
                    value: apc1.example.com
                  - name: username
                    value: user
                  - name: password
                    value: "{{ fence1_password }}"
                  - name: pcmk_host_map
                    value: node1:1;node2:2
          - id: apc2
            agent: 'stonith:fence_apc_snmp'
            instance_attrs:
              - attrs:
                  - name: ip
                    value: apc2.example.com
                  - name: username
                    value: user
                  - name: password
                    value: "{{ fence2_password }}"
                  - name: pcmk_host_map
                    value: node1:1;node2:2
        # Nodes have redundant power supplies, apc1 and apc2. Cluster must
        # ensure that when attempting to reboot a node, both power supplies
        # are turned off before either power supply is turned back on.
        ha_cluster_stonith_levels:
          - level: 1
            target: node1
            resource_ids:
              - apc1
              - apc2
          - level: 1
            target: node2
            resource_ids:
              - apc1
              - apc2

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

ha_cluster_stonith_levels: <stonith_levels>
A variable that defines STONITH levels, also known as fencing topology, which configure a cluster to use multiple devices to fence nodes.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
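After the playbook finishes, you can optionally display the configured fencing levels from any cluster node:

$ pcs stonith level config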
11.7. Configuring a high availability cluster with resource constraints using system roles
When configuring a cluster, you can specify the behavior of the cluster resources to be in line with your application requirements. You can control the behavior of cluster resources by configuring resource constraints.
You can define the following categories of resource constraints:
- Location constraints, which determine which nodes a resource can run on. For information about location constraints, see Determining which nodes a resource can run on.
- Ordering constraints, which determine the order in which the resources are run. For information about ordering constraints, see Determining the order in which cluster resources are run.
- Colocation constraints, which specify that the location of one resource depends on the location of another resource. For information about colocation constraints, see Colocating cluster resources.
- Ticket constraints, which indicate the resources that depend on a particular Booth ticket. For information about Booth ticket constraints, see Multi-site Pacemaker clusters.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create cluster with resource constraints
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        # In order to use constraints, we need resources
        # the constraints will apply to.
        ha_cluster_resource_primitives:
          - id: xvm-fencing
            agent: 'stonith:fence_xvm'
            instance_attrs:
              - attrs:
                  - name: pcmk_host_list
                    value: node1 node2
          - id: dummy-1
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-2
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-3
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-4
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-5
            agent: 'ocf:pacemaker:Dummy'
          - id: dummy-6
            agent: 'ocf:pacemaker:Dummy'
        # location constraints
        ha_cluster_constraints_location:
          # resource ID and node name
          - resource:
              id: dummy-1
            node: node1
            options:
              - name: score
                value: 20
          # resource pattern and node name
          - resource:
              pattern: dummy-\d+
            node: node1
            options:
              - name: score
                value: 10
          # resource ID and rule
          - resource:
              id: dummy-2
            rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28'
          # resource pattern and rule
          - resource:
              pattern: dummy-\d+
            rule: node-type eq weekend and date-spec weekdays=6-7
        # colocation constraints
        ha_cluster_constraints_colocation:
          # simple constraint
          - resource_leader:
              id: dummy-3
            resource_follower:
              id: dummy-4
            options:
              - name: score
                value: -5
          # set constraint
          - resource_sets:
              - resource_ids:
                  - dummy-1
                  - dummy-2
              - resource_ids:
                  - dummy-5
                  - dummy-6
                options:
                  - name: sequential
                    value: "false"
            options:
              - name: score
                value: 20
        # order constraints
        ha_cluster_constraints_order:
          # simple constraint
          - resource_first:
              id: dummy-1
            resource_then:
              id: dummy-6
            options:
              - name: symmetrical
                value: "false"
          # set constraint
          - resource_sets:
              - resource_ids:
                  - dummy-1
                  - dummy-2
                options:
                  - name: require-all
                    value: "false"
                  - name: sequential
                    value: "false"
              - resource_ids:
                  - dummy-3
              - resource_ids:
                  - dummy-4
                  - dummy-5
                options:
                  - name: sequential
                    value: "false"
        # ticket constraints
        ha_cluster_constraints_ticket:
          # simple constraint
          - resource:
              id: dummy-1
            ticket: ticket1
            options:
              - name: loss-policy
                value: stop
          # set constraint
          - resource_sets:
              - resource_ids:
                  - dummy-3
                  - dummy-4
                  - dummy-5
            ticket: ticket2
            options:
              - name: loss-policy
                value: fence

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_resource_primitives: <cluster_resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

ha_cluster_constraints_location: <location_constraints>
A variable that defines resource location constraints.

ha_cluster_constraints_colocation: <colocation_constraints>
A variable that defines resource colocation constraints.

ha_cluster_constraints_order: <order_constraints>
A variable that defines resource order constraints.

ha_cluster_constraints_ticket: <ticket_constraints>
A variable that defines Booth ticket constraints.
Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
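After the playbook finishes, you can optionally display all configured constraints from any cluster node. The output lists the location, colocation, order, and ticket constraints defined in the playbook:

$ pcs constraint config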
11.8. Configuring Corosync values in a high availability cluster using RHEL system roles
You can use the ha_cluster RHEL system role to configure Corosync values in high availability clusters.
The corosync.conf file provides the cluster parameters used by Corosync, the cluster membership and messaging layer that Pacemaker is built on. For your system configuration, you can change some of the default parameters in the corosync.conf file. In general, you should not edit the corosync.conf file directly. You can, however, configure Corosync values by using the ha_cluster RHEL system role.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create cluster that configures Corosync values
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_transport:
          type: knet
          options:
            - name: ip_version
              value: ipv4-6
          links:
            - - name: linknumber
                value: 1
              - name: link_priority
                value: 5
            - - name: linknumber
                value: 0
              - name: link_priority
                value: 10
          compression:
            - name: level
              value: 5
            - name: model
              value: zlib
          crypto:
            - name: cipher
              value: none
            - name: hash
              value: none
        ha_cluster_totem:
          options:
            - name: block_unlisted_ips
              value: 'yes'
            - name: send_join
              value: 0
        ha_cluster_quorum:
          options:
            - name: auto_tie_breaker
              value: 1
            - name: wait_for_all
              value: 1

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_transport: <transport_method>
A variable that sets the cluster transport method.

ha_cluster_totem: <totem_options>
A variable that configures Corosync totem options.

ha_cluster_quorum: <quorum_options>
A variable that configures cluster quorum options.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
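After the playbook finishes, you can optionally inspect the generated corosync.conf file on a cluster node to confirm the transport, totem, and quorum settings:

$ cat /etc/corosync/corosync.conf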
11.9. Exporting a cluster configuration to create a RHEL system role playbook
You can use the ha_cluster RHEL system role to export the Corosync configuration of a cluster into ha_cluster variables that can be fed back to the role to recreate the same cluster.
If you did not use ha_cluster to create your cluster, or if you do not have access to the original playbook for the cluster, you can use this feature to build a new playbook for creating the cluster.
When you export a cluster’s configuration by using the ha_cluster RHEL system role, not all of the variables are exported. You must manually modify the configuration to account for these variables.
The following variables are present in the export:
- ha_cluster_cluster_present
- ha_cluster_start_on_boot
- ha_cluster_cluster_name
- ha_cluster_transport
- ha_cluster_totem
- ha_cluster_quorum
- ha_cluster_node_options - only the node_name, corosync_addresses, and pcs_address options are present.
- ha_cluster_enable_repos
- ha_cluster_enable_repos_resilient_storage
- ha_cluster_manage_firewall
- ha_cluster_manage_selinux
- ha_cluster_install_cloud_agents
- ha_cluster_pcs_permission_list
- ha_cluster_resource_primitives
- ha_cluster_resource_groups
- ha_cluster_resource_clones
- ha_cluster_resource_bundles
The following variables are not present in the export:
- ha_cluster_hacluster_password - This is a mandatory variable for the role, but it cannot be extracted from existing clusters.
- ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, and ha_cluster_fence_virt_key_src - These variables should contain paths to files with Corosync and Pacemaker keys. Since the keys themselves are not exported, these variables are not present in the export either. These keys should be unique for each cluster.
- ha_cluster_regenerate_keys - You should decide whether to use existing keys or to generate new ones.
- ha_cluster_hacluster_qdevice_password - Specifies the password for the hacluster user on a quorum device. This is a required setting when using a quorum device to avoid manual intervention. It cannot be extracted from existing clusters.
- ha_cluster_fence_agent_packages - A list of fence agent packages to install.
- ha_cluster_extra_packages - Specifies any extra packages to be installed on the cluster nodes.
- ha_cluster_use_latest_packages - When set to true, the system role uses the latest available packages for the cluster.
- ha_cluster_pcsd_public_key_src, ha_cluster_pcsd_private_key_src - These variables should contain paths to the TLS certificate and private key for pcsd. Since the certificate and key themselves are not exported, these variables are not present in the export either.
- ha_cluster_pcsd_certificates - When this variable is set, the certificate RHEL system role is used internally to create the private key and certificate for pcsd. It cannot be extracted from existing clusters.
To export the current cluster configuration, run the ha_cluster RHEL system role and set ha_cluster_export_configuration: true. This triggers the export once the role finishes configuring a cluster or a qnetd host and stores it in the ha_cluster_facts variable.
By default, ha_cluster_cluster_present is set to true and ha_cluster_qnetd.present is set to false. These settings will reconfigure your cluster on the specified hosts, remove qnetd configuration from the specified hosts, and then export the configuration. To trigger the export without modifying an existing configuration, run the role with the following settings:
- hosts: node1
vars:
ha_cluster_cluster_present: null
ha_cluster_qnetd: null
ha_cluster_export_configuration: true
roles:
- linux-system-roles.ha_cluster
The following procedure:
- Exports the cluster configuration from cluster node node1 into the ha_cluster_facts variable.
- Sets the ha_cluster_cluster_present and ha_cluster_qnetd variables to null to ensure that running this playbook does not modify the existing cluster configuration.
- Uses the Ansible debug module to display the content of ha_cluster_facts.
- Saves the contents of ha_cluster_facts to a file on the control node in YAML format for you to write a playbook around it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- You have previously configured the high availability cluster with the configuration to export.
- You have created an inventory file on the control node, as described in Preparing a control node on RHEL 10.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Export high availability cluster configuration
  hosts: node1
  tasks:
    - name: Export configuration that does not modify existing cluster
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_present: null
        ha_cluster_qnetd: null
        ha_cluster_export_configuration: true
    - name: Print ha_cluster_facts variable
      ansible.builtin.debug:
        var: ha_cluster_facts
    - name: Save current cluster configuration to a file
      delegate_to: localhost
      ansible.builtin.copy:
        content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }}"
        dest: /path/to/file
        mode: "0640"

The settings specified in the example playbook include the following:
hosts: node1
A node containing the cluster information to export.

ha_cluster_cluster_present: null
Setting to indicate that the cluster configuration will not be changed on the specified host.

ha_cluster_qnetd: null
Setting to indicate that the qnetd host configuration will not be changed on the specified host.

ha_cluster_export_configuration: true
A variable that determines whether to export the current cluster configuration and store it in the ha_cluster_facts variable, which is generated by the ha_cluster_info module.

ha_cluster_facts
A variable that contains the exported cluster configuration.

delegate_to: localhost
Specifies the control node as the location for the exported configuration file.

content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }}", dest: /path/to/file, mode: "0640"
Copies the configuration file in YAML format to /path/to/file, setting the file permissions to 0640.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml

After you have exported the current cluster configuration, you can write a playbook for your system using the variables you exported to /path/to/file on the control node.
You must add the ha_cluster_hacluster_password variable, as it is a required variable but is not present in the export. Optionally, add the ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, ha_cluster_fence_virt_key_src, and ha_cluster_regenerate_keys variables if your system requires them. These variables are never exported.
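For example, a minimal playbook that recreates the cluster from the exported variables might look like the following sketch. It assumes that the export was saved to /path/to/file as in the procedure above, and that the hacluster password is stored in ~/vault.yml under the cluster_password key:

---
- name: Recreate high availability cluster from exported configuration
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
    # Variables exported by the previous procedure (assumed location)
    - /path/to/file
  tasks:
    - name: Apply the exported cluster configuration
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        # Required variable that is never exported
        ha_cluster_hacluster_password: "{{ cluster_password }}"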
11.10. Configuring a high availability cluster with access control lists

You can use the ha_cluster RHEL system role to configure high availability clusters with access control lists (ACLs). With ACLs, you can grant permission for specific local users other than user hacluster to manage a Pacemaker cluster.
A common use case for this feature is to restrict unauthorized users from accessing business-sensitive information.
By default, ACLs are not enabled. Consequently, any member of the haclient group on any node has full local read and write access to the cluster configuration. Users who are not members of haclient have no access. When ACLs are enabled, however, even users who are members of the haclient group have access only to what has been granted to that user by the ACLs. The root and hacluster user accounts always have full access to the cluster configuration, even when ACLs are enabled.
When you set permissions for local users with ACLs, you create a role that defines a set of permissions. You then assign that role to a user. If you assign multiple roles to the same user, any deny permission takes precedence, then write, then read.
The following example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that implements ACLs to control access to the cluster configuration.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with ACLs assigned
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        # To use an ACL role permission reference, the reference must exist in CIB.
        ha_cluster_resource_primitives:
          - id: not-for-operator
            agent: 'ocf:pacemaker:Dummy'
        # ACLs must be enabled (using the enable-acl cluster property) in order to be effective.
        ha_cluster_cluster_properties:
          - attrs:
              - name: enable-acl
                value: 'true'
        ha_cluster_acls:
          acl_roles:
            - id: operator
              description: HA cluster operator
              permissions:
                - kind: write
                  xpath: //crm_config//nvpair[@name='maintenance-mode']
                - kind: deny
                  reference: not-for-operator
            - id: administrator
              permissions:
                - kind: write
                  xpath: /cib
          acl_users:
            - id: alice
              roles:
                - operator
                - administrator
            - id: bob
              roles:
                - administrator
          acl_groups:
            - id: admins
              roles:
                - administrator

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_resource_primitives: <cluster resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

ha_cluster_cluster_properties: <cluster properties>
A list of sets of cluster properties for Pacemaker cluster-wide configuration.

ha_cluster_acls: <dictionary>
A dictionary of ACL role, user, and group values.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
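After the playbook finishes, you can optionally display the configured ACL roles, users, and groups from any cluster node:

$ pcs acl config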
11.11. Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable

You can use the ha_cluster RHEL system role to configure high availability clusters with STONITH Block Device (SBD) fencing in an automated fashion.
You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using a STONITH Block Device (SBD). This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.
You can use the ha_cluster RHEL system role to configure SBD fencing in an automated fashion. With ha_cluster, you can configure watchdog and SBD devices on a node-to-node basis by using one of two variables:
- ha_cluster_node_options: A single variable that you define in a playbook file. It is a list of dictionaries, where each dictionary defines options for one node.
- ha_cluster: A dictionary that defines options for one node only. You configure the ha_cluster variable in an inventory file. To set different values for each node, you define the variable separately for each node.
If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
This example procedure uses the ha_cluster_node_options variable in a playbook file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster variable in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with SBD fencing
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        my_sbd_devices:
          # This variable is indirectly used by various variables of the
          # ha_cluster RHEL system role. Its purpose is to define SBD devices
          # once so they do not need to be repeated several times in the role
          # variables.
          - /dev/disk/by-id/000001
          - /dev/disk/by-id/000002
          - /dev/disk/by-id/000003
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_sbd_enabled: true
        ha_cluster_sbd_options:
          - name: delay-start
            value: 'no'
          - name: startmode
            value: always
          - name: timeout-action
            value: 'flush,reboot'
          - name: watchdog-timeout
            value: 30
        ha_cluster_node_options:
          - node_name: node1
            sbd_watchdog_modules:
              - iTCO_wdt
            sbd_watchdog_modules_blocklist:
              - ipmi_watchdog
            sbd_watchdog: /dev/watchdog1
            sbd_devices: "{{ my_sbd_devices }}"
          - node_name: node2
            sbd_watchdog_modules:
              - iTCO_wdt
            sbd_watchdog_modules_blocklist:
              - ipmi_watchdog
            sbd_watchdog: /dev/watchdog1
            sbd_devices: "{{ my_sbd_devices }}"
        # Best practice for setting SBD timeouts:
        # watchdog-timeout * 2 = msgwait-timeout (set automatically)
        # msgwait-timeout * 1.2 = stonith-timeout
        ha_cluster_cluster_properties:
          - attrs:
              - name: stonith-timeout
                value: 72
        ha_cluster_resource_primitives:
          - id: fence_sbd
            agent: 'stonith:fence_sbd'
            instance_attrs:
              - attrs:
                  - name: devices
                    value: "{{ my_sbd_devices | join(',') }}"
                  - name: pcmk_delay_base
                    value: 30

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.

ha_cluster_hacluster_password: <password>
The password of the hacluster user. The hacluster user has full access to a cluster.

ha_cluster_manage_firewall: true
A variable that determines whether the ha_cluster RHEL system role manages the firewall.

ha_cluster_manage_selinux: true
A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

ha_cluster_sbd_enabled: true
A variable that determines whether the cluster can use the SBD node fencing mechanism.

ha_cluster_sbd_options: <sbd options>
A list of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page on your system.

ha_cluster_node_options: <node options>
A variable that defines settings which vary from one cluster node to another. You can configure the following SBD and watchdog items:

- sbd_watchdog_modules - Watchdog kernel modules to be loaded, which create /dev/watchdog* devices.
- sbd_watchdog_modules_blocklist - Watchdog kernel modules to be unloaded and blocked.
- sbd_watchdog - Watchdog device to be used by SBD.
- sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).

ha_cluster_cluster_properties: <cluster properties>
A list of sets of cluster properties for Pacemaker cluster-wide configuration.

ha_cluster_resource_primitives: <cluster resources>
A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
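After the playbook finishes, you can optionally check the SBD status and configuration from any cluster node:

$ pcs stonith sbd status
$ pcs stonith sbd config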
11.12. Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable

You can use the ha_cluster RHEL system role to configure high availability clusters with STONITH Block Device (SBD) fencing in an automated fashion.
You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using an SBD. This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.
With ha_cluster, you can configure watchdog and SBD devices on a node-to-node basis by using one of two variables:
- ha_cluster_node_options: A single variable that you define in a playbook file. It is a list of dictionaries, where each dictionary defines options for one node.
- ha_cluster: A dictionary that defines options for one node only. You configure the ha_cluster variable in an inventory file. To set different values for each node, you define the variable separately for each node.
If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
The following example procedure uses the ha_cluster system role to create a high availability cluster with SBD fencing. This example procedure uses the ha_cluster variable in an inventory file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster_node_options variable in a playbook file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Create an inventory file for your cluster that configures watchdog and SBD devices for each node by using the ha_cluster variable, as in the following example:

all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001
          - /dev/disk/by-id/000002
          - /dev/disk/by-id/000003
    node2:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog_modules_blocklist:
          - ipmi_watchdog
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001
          - /dev/disk/by-id/000002
          - /dev/disk/by-id/000003

The SBD and watchdog settings specified in the example inventory include the following:
sbd_watchdog_modules
Watchdog kernel modules to be loaded, which create /dev/watchdog* devices.

sbd_watchdog_modules_blocklist
Watchdog kernel modules to be unloaded and blocked.

sbd_watchdog
Watchdog device to be used by SBD.

sbd_devices
Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).
For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, as in the following example. Since you have specified the SBD and watchdog variables in an inventory, you do not need to include them in the playbook.

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with sbd fencing devices configured in an inventory file
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_sbd_enabled: true
        ha_cluster_sbd_options:
          - name: delay-start
            value: 'no'
          - name: startmode
            value: always
          - name: timeout-action
            value: 'flush,reboot'
          - name: watchdog-timeout
            value: 30
        # Best practice for setting SBD timeouts:
        # watchdog-timeout * 2 = msgwait-timeout (set automatically)
        # msgwait-timeout * 1.2 = stonith-timeout
        ha_cluster_cluster_properties:
          - attrs:
              - name: stonith-timeout
                value: 72
        ha_cluster_resource_primitives:
          - id: fence_sbd
            agent: 'stonith:fence_sbd'
            instance_attrs:
              - attrs:
                  # taken from host_vars
                  # this only works if all nodes have the same sbd_devices
                  - name: devices
                    value: "{{ ha_cluster.sbd_devices | join(',') }}"
                  - name: pcmk_delay_base
                    value: 30

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_sbd_enabled: true - A variable that determines whether the cluster can use the SBD node fencing mechanism.
ha_cluster_sbd_options: <sbd_options> - A list of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page on your system.
ha_cluster_cluster_properties: <cluster_properties> - A list of sets of cluster properties for Pacemaker cluster-wide configuration. In this example, the watchdog-timeout of 30 seconds implies an msgwait-timeout of 60 seconds, so following the best practice noted in the playbook, the stonith-timeout is set to 60 * 1.2 = 72 seconds.
ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
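As an optional check after the playbook run, you can confirm that SBD is active and sees its devices on the nodes. The following command, run from one of the cluster nodes, is one possible way to do this:

[root@node1 ~]# pcs stonith sbd status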
You can use the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that configures utilization attributes to define a placement strategy.
A Pacemaker cluster allocates resources according to a resource allocation score. By default, if the resource allocation scores on all the nodes are equal, Pacemaker allocates the resource to the node with the smallest number of allocated resources. If the resources in your cluster use significantly different proportions of a node’s capacities, such as memory or I/O, the default behavior may not be the best strategy for balancing your system’s workload. In this case, you can customize an allocation strategy by configuring utilization attributes and placement strategies for nodes and resources.
For detailed information about configuring utilization attributes and placement strategies, see Configuring a node placement strategy.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with utilization attributes
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_cluster_properties:
          - attrs:
              - name: placement-strategy
                value: utilization
        ha_cluster_node_options:
          - node_name: node1
            utilization:
              - attrs:
                  - name: utilization1
                    value: 1
                  - name: utilization2
                    value: 2
          - node_name: node2
            utilization:
              - attrs:
                  - name: utilization1
                    value: 3
                  - name: utilization2
                    value: 4
        ha_cluster_resource_primitives:
          - id: resource1
            agent: 'ocf:pacemaker:Dummy'
            utilization:
              - attrs:
                  - name: utilization1
                    value: 2
                  - name: utilization2
                    value: 3

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_cluster_properties: <cluster_properties> - A list of sets of cluster properties for Pacemaker cluster-wide configuration. For utilization to have an effect, the placement-strategy property must be set and its value must be different from the value default.
ha_cluster_node_options: <node_options> - A variable that defines various settings that vary from one cluster node to another. In this example, it sets the capacities each node provides: resource1 requires 2 units of utilization1 and 3 units of utilization2, so Pacemaker can place it only on node2, which provides 3 and 4, and not on node1, which provides only 1 and 2.
ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
You can use the ha_cluster RHEL system role to configure alerts for high availability clusters.
When a Pacemaker event occurs, such as a resource or node failure or a configuration change, you may want to take some external action, such as sending an email message, logging to a file, or updating a monitoring system.
You can configure your system to take an external action by using alert agents. These are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. The cluster passes information about the event to the agent through environment variables.
The ha_cluster RHEL system role configures the cluster to call external programs to handle alerts. However, you must provide these programs and distribute them to cluster nodes.
For more detailed information about alert agents, see Triggering scripts for cluster events.
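Because the role configures only the alert entries in the cluster, distributing the agent executable itself is up to you. One possibility is a separate play run before the cluster configuration; the following is a minimal sketch, in which the play name, the local source file files/alert1_agent.sh, and the destination /alert1/path (matching the path used in the example playbook below) are illustrative assumptions:

---
- name: Distribute the alert agent to all cluster nodes
  hosts: node1 node2
  tasks:
    - name: Copy the alert agent executable to the path configured in ha_cluster_alerts
      ansible.builtin.copy:
        src: files/alert1_agent.sh
        dest: /alert1/path
        owner: root
        group: root
        mode: '0755'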
This example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that configures a Pacemaker alert.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster with alerts
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_alerts:
          - id: alert1
            path: /alert1/path
            description: Alert1 description
            instance_attrs:
              - attrs:
                  - name: alert_attr1_name
                    value: alert_attr1_value
            meta_attrs:
              - attrs:
                  - name: alert_meta_attr1_name
                    value: alert_meta_attr1_value
            recipients:
              - value: recipient_value
                id: recipient1
                description: Recipient1 description
                instance_attrs:
                  - attrs:
                      - name: recipient_attr1_name
                        value: recipient_attr1_value
                meta_attrs:
                  - attrs:
                      - name: recipient_meta_attr1_name
                        value: recipient_meta_attr1_value

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_alerts: <alert_definitions> - A variable that defines Pacemaker alerts. Each alert supports the following items:
- id - ID of an alert.
- path - Path to the alert agent executable.
- description - Description of the alert.
- instance_attrs - List of sets of the alert's instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
- meta_attrs - List of sets of the alert's meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
- recipients - List of the alert's recipients. Each recipient supports the following items:
  - value - Value of a recipient.
  - id - ID of the recipient.
  - description - Description of the recipient.
  - instance_attrs - List of sets of the recipient's instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
  - meta_attrs - List of sets of the recipient's meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Your cluster can sustain more node failures than standard quorum rules permit when you configure a separate quorum device. The quorum device acts as a lightweight arbitration device for the cluster. Use a quorum device for clusters with an even number of nodes.
With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation.
For information about quorum devices, see Configuring quorum devices.
To configure a high availability cluster with a separate quorum device by using the ha_cluster RHEL system role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters.
11.15.1. Configuring a quorum device
You can use the ha_cluster RHEL system role to configure a quorum device for high availability clusters. Note that you cannot run a quorum device on a cluster node.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The system that you will use to run the quorum device has active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook-qdevice.yml, with the following content:

---
- name: Configure a host with a quorum device
  hosts: nodeQ
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create a quorum device for the cluster
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_present: false
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_qnetd:
          present: true

The settings specified in the example playbook include the following:
ha_cluster_cluster_present: false - A variable that, if set to false, determines that all cluster configuration will be removed from the target host.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_qnetd: <quorum_device_options> - A variable that configures a qnetd host.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
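Beyond present, the ha_cluster_qnetd variable accepts additional keys. As a hedged sketch of how it might look, based on the role's README, the following fragment shows options for starting the qnetd service on boot and regenerating its TLS certificates; verify the exact option names against the README.md file on your control node:

ha_cluster_qnetd:
  present: true
  start_on_boot: true      # enable the qnetd service at boot
  regenerate_keys: false   # set to true to replace existing qnetd TLS certificates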
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml
11.15.2. Configuring a cluster to use a quorum device
You can use the ha_cluster RHEL system role to configure a cluster with a quorum device.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Create a playbook file, for example, ~/playbook-cluster-qdevice.yml, with the following content:

---
- name: Configure a cluster to use a quorum device
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create cluster that uses a quorum device
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_quorum:
          device:
            model: net
            model_options:
              - name: host
                value: nodeQ
              - name: algorithm
                value: lms

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_quorum: <quorum_parameters> - A variable that configures cluster quorum, which you can use to specify that the cluster uses a quorum device.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml
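After the cluster is running, you can check that it sees the quorum device. The following is a minimal sketch of an ad hoc verification play; the host choice and the use of the pcs quorum status command are illustrative:

---
- name: Check quorum device status from a cluster node
  hosts: node1
  tasks:
    - name: Query the corosync quorum runtime status
      ansible.builtin.command: pcs quorum status
      register: quorum_status
      changed_when: false

    - name: Display the quorum status output
      ansible.builtin.debug:
        var: quorum_status.stdout_lines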
You can use Pacemaker rules to make your configuration more dynamic. For example, you can use a node attribute to assign machines to different processing groups based on time and then use that attribute when creating location constraints.
Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. For information on node attributes, see Determining resource location with rules.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures node attributes.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create a cluster that defines node attributes
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_node_options:
          - node_name: node1
            attributes:
              - attrs:
                  - name: attribute1
                    value: value1A
                  - name: attribute2
                    value: value2A
          - node_name: node2
            attributes:
              - attrs:
                  - name: attribute1
                    value: value1B
                  - name: attribute2
                    value: value2B

The settings specified in the example playbook include the following:

ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_node_options: <node_settings> - A variable that defines various settings that vary from one cluster node to another.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
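After the cluster is created, you can reference these node attributes in rule-based location constraints. The following fragment is a minimal sketch, assuming your version of the role supports rule expressions in the ha_cluster_constraints_location variable as described in its README.md file; the resource ID resource1 and the score value are illustrative:

ha_cluster_constraints_location:
  # Prefer nodes where attribute1 is set to value1A
  - resource:
      id: resource1
    rule: attribute1 eq value1A
    options:
      - name: score
        value: "100"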
You can use the ha_cluster RHEL system role to configure an Apache HTTP server in a high availability cluster.
High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat Knowledgebase article Red Hat High Availability Add-On Documentation Guide.
The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.
This example uses an APC power switch with a host name of zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 10.
- You have configured a Logical Volume Manager (LVM) logical volume with an XFS file system, as described in Configuring an LVM volume with an XFS file system in a Pacemaker cluster.
- You have configured an Apache HTTP server, as described in Configuring an Apache HTTP Server.
- Your system includes an APC power switch that will be used to fence the cluster nodes.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

cluster_password: <cluster_password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:

---
- name: Create a high availability cluster
  hosts: z1.example.com z2.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure active/passive Apache server in a high availability cluster
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.ha_cluster
      vars:
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_cluster_name: my_cluster
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_fence_agent_packages:
          - fence-agents-apc-snmp
        ha_cluster_resource_primitives:
          - id: myapc
            agent: stonith:fence_apc_snmp
            instance_attrs:
              - attrs:
                  - name: ipaddr
                    value: zapc.example.com
                  - name: pcmk_host_map
                    value: z1.example.com:1;z2.example.com:2
                  - name: login
                    value: apc
                  - name: passwd
                    value: apc
          - id: my_lvm
            agent: ocf:heartbeat:LVM-activate
            instance_attrs:
              - attrs:
                  - name: vgname
                    value: my_vg
                  - name: vg_access_mode
                    value: system_id
          - id: my_fs
            agent: Filesystem
            instance_attrs:
              - attrs:
                  - name: device
                    value: /dev/my_vg/my_lv
                  - name: directory
                    value: /var/www
                  - name: fstype
                    value: xfs
          - id: VirtualIP
            agent: IPaddr2
            instance_attrs:
              - attrs:
                  - name: ip
                    value: 198.51.100.3
                  - name: cidr_netmask
                    value: 24
          - id: Website
            agent: apache
            instance_attrs:
              - attrs:
                  - name: configfile
                    value: /etc/httpd/conf/httpd.conf
                  - name: statusurl
                    value: http://127.0.0.1/server-status
        ha_cluster_resource_groups:
          - id: apachegroup
            resource_ids:
              - my_lvm
              - my_fs
              - VirtualIP
              - Website

The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name> - The name of the cluster you are creating.
ha_cluster_hacluster_password: <password> - The password of the hacluster user. The hacluster user has full access to a cluster.
ha_cluster_manage_firewall: true - A variable that determines whether the ha_cluster RHEL system role manages the firewall.
ha_cluster_manage_selinux: true - A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
ha_cluster_fence_agent_packages: <fence_agent_packages> - A list of fence agent packages to install.
ha_cluster_resource_primitives: <cluster_resources> - A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_resource_groups: <resource_groups> - A list of resource group definitions configured by the ha_cluster RHEL system role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml

When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.

Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster:

# /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true

Replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path, where website is the name of the Apache resource. In this example, the Apache resource name is Website:

/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
Verification
From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node, z1.example.com.

If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration.

[root@z1 ~]# pcs status
Cluster name: my_cluster
Last updated: Wed Jul 31 16:38:51 2013
Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.10-5.el7-9abe687
2 Nodes configured
6 Resources configured

Online: [ z1.example.com z2.example.com ]

Full list of resources:
 myapc  (stonith:fence_apc_snmp):      Started z1.example.com
 Resource Group: apachegroup
     my_lvm    (ocf::heartbeat:LVM-activate):  Started z1.example.com
     my_fs     (ocf::heartbeat:Filesystem):    Started z1.example.com
     VirtualIP (ocf::heartbeat:IPaddr2):       Started z1.example.com
     Website   (ocf::heartbeat:apache):        Started z1.example.com
IPaddr2resource to view the sample display, consisting of the simple word "Hello".HelloTo test whether the resource group running on
z1.example.comfails over to nodez2.example.com, put nodez1.example.cominstandbymode, after which the node will no longer be able to host resources.[root@z1 ~]# pcs node standby z1.example.comAfter putting node
z1instandbymode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running onz2.[root@z1 ~]# pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.comThe website at the defined IP address should still display, without interruption.
To remove z1 from standby mode, enter the following command:

[root@z1 ~]# pcs node unstandby z1.example.com

Note

Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.