
Chapter 11. Configuring a high-availability cluster by using the ha_cluster system role


With the ha_cluster system role, you can configure and manage a high-availability cluster that uses the Pacemaker high availability cluster resource manager.

11.1. Specifying an inventory for the ha_cluster RHEL system role

When configuring an HA cluster using the ha_cluster system role playbook, you configure the names and addresses of the nodes for the cluster in an inventory.

For each node in an inventory, you can optionally specify the following items:

  • node_name - the name of a node in a cluster.
  • pcs_address - an address used by pcs to communicate with the node. It can be a name, an FQDN, or an IP address, and it can include a port number.
  • corosync_addresses - list of addresses used by Corosync. All nodes which form a particular cluster must have the same number of addresses. The order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes.

The following example shows an inventory with targets node1 and node2. node1 and node2 must be either fully qualified domain names or names through which Ansible can otherwise connect to the nodes, for example, because the names are resolvable through the /etc/hosts file.

all:
  hosts:
    node1:
      ha_cluster:
        node_name: node-A
        pcs_address: node1-address
        corosync_addresses:
          - 192.168.1.11
          - 192.168.2.11
    node2:
      ha_cluster:
        node_name: node-B
        pcs_address: node2-address:2224
        corosync_addresses:
          - 192.168.1.12
          - 192.168.2.12

You can optionally configure watchdog and SBD devices for each node in an inventory. All SBD devices must be shared to and accessible from all nodes, while watchdog devices can differ from node to node. For an example procedure that configures SBD node fencing in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
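
The following minimal sketch shows how such per-node settings might look in an inventory. The watchdog module, watchdog device, and SBD device path reuse the values from the SBD fencing examples later in this chapter and serve only as placeholders for your own hardware.

all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt                    # watchdog kernel module to load (placeholder)
        sbd_watchdog: /dev/watchdog1    # watchdog device used by SBD (placeholder)
        sbd_devices:
          - /dev/disk/by-id/000001      # shared SBD device, referenced by its stable ID (placeholder)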

11.2. Creating pcsd TLS certificates and key files for a high availability cluster

The connection between cluster nodes is secured using Transport Layer Security (TLS) encryption. By default, the pcsd daemon generates self-signed certificates. For many deployments, however, you may want to replace the default certificates with certificates issued by your company's certificate authority and to apply your company's certificate policies for pcsd.

You can use the ha_cluster RHEL system role to create TLS certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster RHEL system role uses the certificate RHEL system role internally to manage TLS certificates.
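
For example, if your managed nodes are enrolled in Red Hat Identity Management (IdM), a variable definition along the following lines could request the pcsd certificates from the IdM CA instead of self-signing them. This is a sketch that assumes IdM enrollment; FILENAME is a placeholder for the certificate and key file name.

ha_cluster_pcsd_certificates:
  - name: FILENAME                        # creates FILENAME.crt and FILENAME.key in /var/lib/pcsd
    common_name: "{{ ansible_hostname }}"
    ca: ipa                               # issue the certificate through the IdM (IPA) CA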

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create TLS certificates and key files in a high availability cluster
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_pcsd_certificates:
              - name: FILENAME
                common_name: "{{ ansible_hostname }}"
                ca: self-sign

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_pcsd_certificates: <certificate_properties>
    A variable that creates a self-signed pcsd certificate and private key files in /var/lib/pcsd. In this example, the pcsd certificate has the file name FILENAME.crt and the key file is named FILENAME.key.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.3. Configuring a high availability cluster running no resources

You can use the ha_cluster system role to configure a basic cluster in a simple, automatic way. Once you have created a basic cluster, you can use the pcs command-line interface to configure the other cluster components and behaviors on a resource-by-resource basis. The following example procedure configures a basic two-node cluster with no fencing configured using the minimum required parameters.

Warning

The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create cluster with minimum required parameters and no fencing
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.4. Configuring a high availability cluster with fencing and resources

The specific components of a cluster configuration depend on your individual needs, which vary between sites. The following example procedure shows the formats for configuring different cluster components by using the ha_cluster RHEL system role. The configured cluster includes a fencing device, cluster resources, resource groups, and a cloned resource.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create cluster with fencing and resources
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_resource_primitives:
              - id: xvm-fencing
                agent: 'stonith:fence_xvm'
                instance_attrs:
                  - attrs:
                      - name: pcmk_host_list
                        value: node1 node2
              - id: simple-resource
                agent: 'ocf:pacemaker:Dummy'
              - id: resource-with-options
                agent: 'ocf:pacemaker:Dummy'
                instance_attrs:
                  - attrs:
                      - name: fake
                        value: fake-value
                      - name: passwd
                        value: passwd-value
                meta_attrs:
                  - attrs:
                      - name: target-role
                        value: Started
                      - name: is-managed
                        value: 'true'
                operations:
                  - action: start
                    attrs:
                      - name: timeout
                        value: '30s'
                  - action: monitor
                    attrs:
                      - name: timeout
                        value: '5'
                      - name: interval
                        value: '1min'
              - id: dummy-1
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-2
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-3
                agent: 'ocf:pacemaker:Dummy'
              - id: simple-clone
                agent: 'ocf:pacemaker:Dummy'
              - id: clone-with-options
                agent: 'ocf:pacemaker:Dummy'
            ha_cluster_resource_groups:
              - id: simple-group
                resource_ids:
                  - dummy-1
                  - dummy-2
                meta_attrs:
                  - attrs:
                      - name: target-role
                        value: Started
                      - name: is-managed
                        value: 'true'
              - id: cloned-group
                resource_ids:
                  - dummy-3
            ha_cluster_resource_clones:
              - resource_id: simple-clone
              - resource_id: clone-with-options
                promotable: yes
                id: custom-clone-id
                meta_attrs:
                  - attrs:
                      - name: clone-max
                        value: '2'
                      - name: clone-node-max
                        value: '1'
              - resource_id: cloned-group
                promotable: yes

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_resource_primitives: <cluster_resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
    ha_cluster_resource_groups: <resource_groups>
    A list of resource group definitions configured by the ha_cluster RHEL system role.
    ha_cluster_resource_clones: <resource_clones>
    A list of resource clone definitions configured by the ha_cluster RHEL system role.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.5. Configuring a high availability cluster with resource and resource operation defaults

In your cluster configuration, you can change the Pacemaker default values of a resource option for all resources. You can also change the default value for all resource operations in the cluster.

For information about changing the default value of a resource option, see Changing the default value of a resource option. For information about global resource operation defaults, see Configuring global resource operation defaults.

The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines resource and resource operation defaults.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create cluster with fencing and resource operation defaults
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            # Set a different resource-stickiness value during
            # and outside work hours. This allows resources to
            # automatically move back to their most
            # preferred hosts, but at a time that
            # does not interfere with business activities.
            ha_cluster_resource_defaults:
              meta_attrs:
                - id: core-hours
                  rule: date-spec hours=9-16 weekdays=1-5
                  score: 2
                  attrs:
                    - name: resource-stickiness
                      value: INFINITY
                - id: after-hours
                  score: 1
                  attrs:
                    - name: resource-stickiness
                      value: 0
            # Default the timeout on all 10-second-interval
            # monitor actions on IPaddr2 resources to 8 seconds.
            ha_cluster_resource_operation_defaults:
              meta_attrs:
                - rule: resource ::IPaddr2 and op monitor interval=10s
                  score: INFINITY
                  attrs:
                    - name: timeout
                      value: 8s

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_resource_defaults: <resource_defaults>
    A variable that defines sets of resource defaults.
    ha_cluster_resource_operation_defaults: <resource_operation_defaults>
    A variable that defines sets of resource operation defaults.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.6. Configuring a high availability cluster with fencing levels

When you configure multiple fencing devices for a node, you need to define fencing levels for those devices to determine the order that Pacemaker will use the devices to attempt to fence a node. For information about fencing levels, see Configuring fencing levels.

The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines fencing levels.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      fence1_password: <fence1_password>
      fence2_password: <fence2_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml. This example playbook file configures a cluster running the firewalld and selinux services.

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure a cluster that defines fencing levels
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_resource_primitives:
              - id: apc1
                agent: 'stonith:fence_apc_snmp'
                instance_attrs:
                  - attrs:
                      - name: ip
                        value: apc1.example.com
                      - name: username
                        value: user
                      - name: password
                        value: "{{ fence1_password }}"
                      - name: pcmk_host_map
                        value: node1:1;node2:2
              - id: apc2
                agent: 'stonith:fence_apc_snmp'
                instance_attrs:
                  - attrs:
                      - name: ip
                        value: apc2.example.com
                      - name: username
                        value: user
                      - name: password
                        value: "{{ fence2_password }}"
                      - name: pcmk_host_map
                        value: node1:1;node2:2
            # Nodes have redundant power supplies, apc1 and apc2. Cluster must
            # ensure that when attempting to reboot a node, both power
            # supplies are turned off before either power supply is turned
            # back on.
            ha_cluster_stonith_levels:
              - level: 1
                target: node1
                resource_ids:
                  - apc1
                  - apc2
              - level: 1
                target: node2
                resource_ids:
                  - apc1
                  - apc2

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_resource_primitives: <cluster_resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
    ha_cluster_stonith_levels: <stonith_levels>
    A variable that defines STONITH levels, also known as fencing topology, which configure a cluster to use multiple devices to fence nodes.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.7. Configuring a high availability cluster with resource constraints

When configuring a cluster, you can specify the behavior of the cluster resources so that it aligns with your application requirements. You can control the behavior of cluster resources by configuring resource constraints.

You can define the following categories of resource constraints:

  • Location constraints, which determine on which nodes a resource can run.
  • Colocation constraints, which determine that the location of one resource depends on the location of another resource.
  • Order constraints, which determine the order in which resources are started and stopped.
  • Ticket constraints, which determine whether resources depend on a Booth ticket in a multi-site cluster.

The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create cluster with resource constraints
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            # In order to use constraints, we need resources
            # the constraints will apply to.
            ha_cluster_resource_primitives:
              - id: xvm-fencing
                agent: 'stonith:fence_xvm'
                instance_attrs:
                  - attrs:
                      - name: pcmk_host_list
                        value: node1 node2
              - id: dummy-1
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-2
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-3
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-4
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-5
                agent: 'ocf:pacemaker:Dummy'
              - id: dummy-6
                agent: 'ocf:pacemaker:Dummy'
            # location constraints
            ha_cluster_constraints_location:
              # resource ID and node name
              - resource:
                  id: dummy-1
                node: node1
                options:
                  - name: score
                    value: 20
              # resource pattern and node name
              - resource:
                  pattern: dummy-\d+
                node: node1
                options:
                  - name: score
                    value: 10
              # resource ID and rule
              - resource:
                  id: dummy-2
                rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28'
              # resource pattern and rule
              - resource:
                  pattern: dummy-\d+
                rule: node-type eq weekend and date-spec weekdays=6-7
            # colocation constraints
            ha_cluster_constraints_colocation:
              # simple constraint
              - resource_leader:
                  id: dummy-3
                resource_follower:
                  id: dummy-4
                options:
                  - name: score
                    value: -5
              # set constraint
              - resource_sets:
                  - resource_ids:
                      - dummy-1
                      - dummy-2
                  - resource_ids:
                      - dummy-5
                      - dummy-6
                    options:
                      - name: sequential
                        value: "false"
                options:
                  - name: score
                    value: 20
            # order constraints
            ha_cluster_constraints_order:
              # simple constraint
              - resource_first:
                  id: dummy-1
                resource_then:
                  id: dummy-6
                options:
                  - name: symmetrical
                    value: "false"
              # set constraint
              - resource_sets:
                  - resource_ids:
                      - dummy-1
                      - dummy-2
                    options:
                      - name: require-all
                        value: "false"
                      - name: sequential
                        value: "false"
                  - resource_ids:
                      - dummy-3
                  - resource_ids:
                      - dummy-4
                      - dummy-5
                    options:
                      - name: sequential
                        value: "false"
            # ticket constraints
            ha_cluster_constraints_ticket:
              # simple constraint
              - resource:
                  id: dummy-1
                ticket: ticket1
                options:
                  - name: loss-policy
                    value: stop
              # set constraint
              - resource_sets:
                  - resource_ids:
                      - dummy-3
                      - dummy-4
                      - dummy-5
                ticket: ticket2
                options:
                  - name: loss-policy
                    value: fence

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_resource_primitives: <cluster_resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
    ha_cluster_constraints_location: <location_constraints>
    A variable that defines resource location constraints.
    ha_cluster_constraints_colocation: <colocation_constraints>
    A variable that defines resource colocation constraints.
    ha_cluster_constraints_order: <order_constraints>
    A variable that defines resource order constraints.
    ha_cluster_constraints_ticket: <ticket_constraints>
    A variable that defines Booth ticket constraints.
  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.8. Configuring Corosync values in a high availability cluster

The corosync.conf file provides the cluster parameters used by Corosync, the cluster membership and messaging layer that Pacemaker is built on. For your system configuration, you can change some of the default parameters in the corosync.conf file. In general, you should not edit the corosync.conf file directly. You can, however, configure Corosync values by using the ha_cluster RHEL system role.

The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures Corosync values.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create cluster that configures Corosync values
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_transport:
              type: knet
              options:
                - name: ip_version
                  value: ipv4-6
              links:
                -
                  - name: linknumber
                    value: 1
                  - name: link_priority
                    value: 5
                -
                  - name: linknumber
                    value: 0
                  - name: link_priority
                    value: 10
              compression:
                - name: level
                  value: 5
                - name: model
                  value: zlib
              crypto:
                - name: cipher
                  value: none
                - name: hash
                  value: none
            ha_cluster_totem:
              options:
                - name: block_unlisted_ips
                  value: 'yes'
                - name: send_join
                  value: 0
            ha_cluster_quorum:
              options:
                - name: auto_tie_breaker
                  value: 1
                - name: wait_for_all
                  value: 1

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_transport: <transport_method>
    A variable that sets the cluster transport method.
    ha_cluster_totem: <totem_options>
    A variable that configures Corosync totem options.
    ha_cluster_quorum: <quorum_options>
    A variable that configures cluster quorum options.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.9. Exporting a cluster configuration to create a RHEL system role playbook

You can use the ha_cluster RHEL system role to export the Corosync configuration of a cluster into ha_cluster variables that can be fed back to the role to recreate the same cluster. If you did not use ha_cluster to create your cluster, or if you do not have access to the original playbook for the cluster, you can use this feature to build a new playbook for creating the cluster.

When you export a cluster’s configuration by using the ha_cluster RHEL system role, not all of the variables are exported. You must manually modify the configuration to account for these variables.

The following variables are present in the export:

  • ha_cluster_cluster_present
  • ha_cluster_start_on_boot
  • ha_cluster_cluster_name
  • ha_cluster_transport
  • ha_cluster_totem
  • ha_cluster_quorum
  • ha_cluster_node_options - Only the node_name, corosync_addresses and pcs_address options are present.
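
As an illustration only, the exported content might look similar to the following for a two-node cluster. The exact keys and values depend on your cluster configuration and on the role version; the values shown here reuse the sample names and addresses from earlier in this chapter.

ha_cluster_cluster_present: true
ha_cluster_start_on_boot: true
ha_cluster_cluster_name: my-new-cluster
ha_cluster_transport:
  type: knet
ha_cluster_node_options:
  - node_name: node1
    corosync_addresses:
      - 192.168.1.11
    pcs_address: node1
  - node_name: node2
    corosync_addresses:
      - 192.168.1.12
    pcs_address: node2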

The following variables are not present in the export:

  • ha_cluster_hacluster_password - This is a mandatory variable for the role but it cannot be extracted from existing clusters.
  • ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src and ha_cluster_fence_virt_key_src - These variables should contain paths to files with Corosync and Pacemaker keys. Since the keys themselves are not exported, these variables are not present in the export either. These keys should be unique for each cluster.
  • ha_cluster_regenerate_keys - You should decide whether to use existing keys or to generate new ones.

To export the current cluster configuration, run the ha_cluster RHEL system role and set ha_cluster_export_configuration: true. This triggers the export once the role finishes configuring a cluster or a qnetd host and stores it in the ha_cluster_facts variable.

By default, ha_cluster_cluster_present is set to true and ha_cluster_qnetd.present is set to false. These settings will reconfigure your cluster on the specified hosts, remove qnetd configuration from the specified hosts, and then export the configuration. To trigger the export without modifying an existing configuration, run the role with the following settings:

- hosts: node1
  vars:
    ha_cluster_cluster_present: null
    ha_cluster_qnetd: null
    ha_cluster_export_configuration: true

  roles:
    - linux-system-roles.ha_cluster

The following procedure:

  • Exports the cluster configuration from cluster node node1 into the ha_cluster_facts variable.
  • Sets the ha_cluster_cluster_present and ha_cluster_qnetd variables to null to ensure that running this playbook does not modify the existing cluster configuration.
  • Uses the Ansible debug module to display the content of ha_cluster_facts.
  • Saves the contents of ha_cluster_facts to a file on the control node in a YAML format for you to write a playbook around it.

Prerequisites

Procedure

  1. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Export high availability cluster configuration
      hosts: node1
      tasks:
        - name: Export configuration that does not modify existing cluster
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_present: null
            ha_cluster_qnetd: null
            ha_cluster_export_configuration: true
        - name: Print ha_cluster_facts variable
          ansible.builtin.debug:
            var: ha_cluster_facts
        - name: Save current cluster configuration to a file
          delegate_to: localhost
          ansible.builtin.copy:
            content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }}"
            dest: /path/to/file
            mode: "0640"

    The settings specified in the example playbook include the following:

    hosts: node1
    A node containing the cluster information to export.
    ha_cluster_cluster_present: null
    Setting to indicate that the cluster configuration will not be changed on the specified host.
    ha_cluster_qnetd: null
    Setting to indicate that the qnetd host configuration will not be changed on the specified host.
    ha_cluster_export_configuration: true
    A variable that determines whether to export the current cluster configuration and store it in the ha_cluster_facts variable, which is generated by the ha_cluster_info module.
    ha_cluster_facts
    A variable that contains the exported cluster configuration.
    delegate_to: localhost
    Specifies the control node as the location for the exported configuration file.
    content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }}", dest: /path/to/file, mode: "0640"

    Copies the configuration file in a YAML format to /path/to/file, setting the file permissions to 0640.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  2. Write a playbook for your system using the variables you exported to /path/to/file on the control node.

    You must add the ha_cluster_hacluster_password variable, as it is a required variable but is not present in the export. Optionally, add the ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, ha_cluster_fence_virt_key_src, and ha_cluster_regenerate_keys variables if your system requires them. These variables are never exported.
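
    A minimal sketch of such a playbook, assuming that you paste the variables exported to /path/to/file into the vars section and keep the hacluster password in an Ansible Vault file as in the other procedures in this chapter, might look like the following. The exported values shown are examples only.

    ---
    - name: Recreate the cluster from the exported configuration
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure cluster by using the exported variables
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            # Required variable that is never exported
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            # Paste the variables exported to /path/to/file below, for example:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_transport:
              type: knet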

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook ~/playbook.yml

11.10. Configuring a high availability cluster with ACLs

The pcs administration account for a cluster is hacluster. Using access control lists (ACLs), you can grant permission for specific local users other than user hacluster to manage a Pacemaker cluster. A common use case for this feature is to restrict unauthorized users from accessing business-sensitive information.

By default, ACLs are not enabled. Consequently, any member of the group haclient on all nodes has full local read and write access to the cluster configuration. Users who are not members of haclient have no access. When ACLs are enabled, however, even users who are members of the haclient group have access only to what has been granted to that user by the ACLs. The root and hacluster user accounts always have full access to the cluster configuration, even when ACLs are enabled.

When you set permissions for local users with ACLs, you create a role that defines a set of permissions and then assign that role to a user. If you assign multiple roles to the same user, any deny permission takes precedence, then write, then read.

The following example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that implements ACLs to control access to the cluster configuration.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure a cluster with ACLs assigned
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
              ha_cluster_cluster_name: my-new-cluster
              ha_cluster_hacluster_password: "{{ cluster_password }}"
              ha_cluster_manage_firewall: true
              ha_cluster_manage_selinux: true
              # To use an ACL role permission reference, the reference must exist in CIB.
              ha_cluster_resource_primitives:
                - id: not-for-operator
                  agent: 'ocf:pacemaker:Dummy'
              # ACLs must be enabled (using the enable-acl cluster property) in order to be effective.
              ha_cluster_cluster_properties:
                - attrs:
                    - name: enable-acl
                      value: 'true'
              ha_cluster_acls:
                acl_roles:
                  - id: operator
                    description: HA cluster operator
                    permissions:
                      - kind: write
                        xpath: //crm_config//nvpair[@name='maintenance-mode']
                      - kind: deny
                        reference: not-for-operator
                  - id: administrator
                    permissions:
                      - kind: write
                        xpath: /cib
                acl_users:
                  - id: alice
                    roles:
                      - operator
                      - administrator
                  - id: bob
                    roles:
                      - administrator
                acl_groups:
                  - id: admins
                    roles:
                      - administrator

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_resource_primitives: <cluster resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
    ha_cluster_cluster_properties: <cluster properties>
    A list of sets of cluster properties for Pacemaker cluster-wide configuration.
    ha_cluster_acls: <dictionary>
    A dictionary of ACL role, user, and group values.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.11. Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable

You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using a STONITH Block Device (SBD). This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.

You can use the ha_cluster RHEL system role to configure SBD fencing in an automated fashion. With ha_cluster, you can configure watchdog and SBD devices on a per-node basis by using one of two variables:

  • ha_cluster_node_options: This is a single variable you define in a playbook file. It is a list of dictionaries where each dictionary defines options for one node.
  • ha_cluster: A dictionary that defines options for one node only. You configure the ha_cluster variable in an inventory file. To set different values for each node, you define the variable separately for each node.

If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.

This example procedure uses the ha_cluster_node_options variable in a playbook file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster variable in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure a cluster with SBD fencing
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            my_sbd_devices:
              # This variable is indirectly used by various variables of the ha_cluster RHEL system role.
              # Its purpose is to define SBD devices once so they do not need
              # to be repeated several times in the role variables.
              - /dev/disk/by-id/000001
              - /dev/disk/by-id/000002
              - /dev/disk/by-id/000003
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_sbd_enabled: true
            ha_cluster_sbd_options:
              - name: delay-start
                value: 'no'
              - name: startmode
                value: always
              - name: timeout-action
                value: 'flush,reboot'
              - name: watchdog-timeout
                value: 30
            ha_cluster_node_options:
              - node_name: node1
                sbd_watchdog_modules:
                  - iTCO_wdt
                sbd_watchdog_modules_blocklist:
                  - ipmi_watchdog
                sbd_watchdog: /dev/watchdog1
                sbd_devices: "{{ my_sbd_devices }}"
              - node_name: node2
                sbd_watchdog_modules:
                  - iTCO_wdt
                sbd_watchdog_modules_blocklist:
                  - ipmi_watchdog
                sbd_watchdog: /dev/watchdog1
                sbd_devices: "{{ my_sbd_devices }}"
            # Best practice for setting SBD timeouts:
            # watchdog-timeout * 2 = msgwait-timeout (set automatically)
            # msgwait-timeout * 1.2 = stonith-timeout
            ha_cluster_cluster_properties:
              - attrs:
                  - name: stonith-timeout
                    value: 72
            ha_cluster_resource_primitives:
              - id: fence_sbd
                agent: 'stonith:fence_sbd'
                instance_attrs:
                  - attrs:
                      - name: devices
                        value: "{{ my_sbd_devices | join(',') }}"
                      - name: pcmk_delay_base
                        value: 30

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_sbd_enabled: true
    A variable that determines whether the cluster can use the SBD node fencing mechanism.
    ha_cluster_sbd_options: <sbd options>
    A list of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page on your system.
    ha_cluster_node_options: <node options>

    A variable that defines settings which vary from one cluster node to another. You can configure the following SBD and watchdog items:

    • sbd_watchdog_modules - Modules to be loaded, which create /dev/watchdog* devices.
    • sbd_watchdog_modules_blocklist - Watchdog kernel modules to be unloaded and blocked.
    • sbd_watchdog - Watchdog device to be used by SBD.
    • sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).
    ha_cluster_cluster_properties: <cluster properties>
    A list of sets of cluster properties for Pacemaker cluster-wide configuration.
    ha_cluster_resource_primitives: <cluster resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

11.12. Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable

You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using a STONITH Block Device (SBD). This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.

You can use the ha_cluster RHEL system role to configure SBD fencing in an automated fashion. With ha_cluster, you can configure watchdog and SBD devices on a per-node basis by using one of two variables:

  • ha_cluster_node_options: This is a single variable you define in a playbook file. It is a list of dictionaries where each dictionary defines options for one node.
  • ha_cluster: A dictionary that defines options for one node only. You configure the ha_cluster variable in an inventory file. To set different values for each node, you define the variable separately for each node.

If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
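
For reference, the following is a minimal sketch of the playbook-based approach, in which the same SBD items are set through the ha_cluster_node_options variable. The watchdog and disk paths shown here are hypothetical placeholders; the inventory-based procedure that follows uses the ha_cluster variable instead.

    ha_cluster_node_options:
      - node_name: node1
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1          # hypothetical watchdog device
        sbd_devices:
          - /dev/disk/by-id/000001            # hypothetical shared SBD device
      - node_name: node2
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/000001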

The following example procedure uses the ha_cluster system role to create a high availability cluster with SBD fencing. This example procedure uses the ha_cluster variable in an inventory file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster_node_options variable in a playbook file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable.

Warning

The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Create an inventory file for your cluster that configures watchdog and SBD devices for each node by using the ha_cluster variable, as in the following example:

    all:
      hosts:
        node1:
          ha_cluster:
            sbd_watchdog_modules:
              - iTCO_wdt
            sbd_watchdog_modules_blocklist:
              - ipmi_watchdog
            sbd_watchdog: /dev/watchdog1
            sbd_devices:
              - /dev/disk/by-id/000001
              - /dev/disk/by-id/000002
              - /dev/disk/by-id/000003
        node2:
          ha_cluster:
            sbd_watchdog_modules:
              - iTCO_wdt
            sbd_watchdog_modules_blocklist:
              - ipmi_watchdog
            sbd_watchdog: /dev/watchdog1
            sbd_devices:
              - /dev/disk/by-id/000001
              - /dev/disk/by-id/000002
              - /dev/disk/by-id/000003
    Copy to Clipboard Toggle word wrap

    The SBD and watchdog settings specified in the example inventory include the following:

    sbd_watchdog_modules
    Watchdog kernel modules to be loaded, which create /dev/watchdog* devices.
    sbd_watchdog_modules_blocklist
    Watchdog kernel modules to be unloaded and blocked.
    sbd_watchdog
    Watchdog device to be used by SBD.
    sbd_devices
    Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).

    For general information about creating an inventory file, see Preparing a control node on RHEL 10.

  2. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
      Copy to Clipboard Toggle word wrap
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      Copy to Clipboard Toggle word wrap
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  3. Create a playbook file, for example, ~/playbook.yml, as in the following example. Because you have specified the SBD and watchdog variables in an inventory, you do not need to include them in the playbook.

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure a cluster with sbd fencing devices configured in an inventory file
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
              ha_cluster_cluster_name: my-new-cluster
              ha_cluster_hacluster_password: "{{ cluster_password }}"
              ha_cluster_manage_firewall: true
              ha_cluster_manage_selinux: true
              ha_cluster_sbd_enabled: true
              ha_cluster_sbd_options:
                - name: delay-start
                  value: 'no'
                - name: startmode
                  value: always
                - name: timeout-action
                  value: 'flush,reboot'
                - name: watchdog-timeout
                  value: 30
              # Best practice for setting SBD timeouts:
              # watchdog-timeout * 2 = msgwait-timeout (set automatically)
              # msgwait-timeout * 1.2 = stonith-timeout
              ha_cluster_cluster_properties:
                - attrs:
                    - name: stonith-timeout
                      value: 72
              ha_cluster_resource_primitives:
                - id: fence_sbd
                  agent: 'stonith:fence_sbd'
                  instance_attrs:
                    - attrs:
                        # taken from host_vars
                        # this only works if all nodes have the same sbd_devices
                        - name: devices
                          value: "{{ ha_cluster.sbd_devices | join(',') }}"
                        - name: pcmk_delay_base
                          value: 30
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_sbd_enabled: true
    A variable that determines whether the cluster can use the SBD node fencing mechanism.
    ha_cluster_sbd_options: <sbd options>
    A list of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd(8) man page on your system.
    ha_cluster_cluster_properties: <cluster properties>
    A list of sets of cluster properties for Pacemaker cluster-wide configuration.
    ha_cluster_resource_primitives: <cluster resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  4. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  5. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap
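
After the playbook finishes, you can optionally confirm that SBD is active from the control node. The following is a minimal, read-only sketch that assumes the node names used in the inventory above:

    ---
    - name: Check SBD status on the cluster (optional)
      hosts: node1
      tasks:
        - name: Query SBD status with pcs
          ansible.builtin.command: pcs stonith sbd status
          register: sbd_status
          changed_when: false

        - name: Show the command output
          ansible.builtin.debug:
            var: sbd_status.stdout_lines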

A Pacemaker cluster allocates resources according to a resource allocation score. By default, if the resource allocation scores on all the nodes are equal, Pacemaker allocates the resource to the node with the smallest number of allocated resources. If the resources in your cluster use significantly different proportions of a node’s capacities, such as memory or I/O, the default behavior may not be the best strategy for balancing your system’s workload. In this case, you can customize an allocation strategy by configuring utilization attributes and placement strategies for nodes and resources.

For detailed information about configuring utilization attributes and placement strategies, see Configuring a node placement strategy.

This example procedure uses the ha_cluster RHEL system role to create a high availability cluster in an automated fashion that configures utilization attributes to define a placement strategy.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
      Copy to Clipboard Toggle word wrap
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      Copy to Clipboard Toggle word wrap
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure a cluster with utilization attributes
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
              ha_cluster_cluster_name: my-new-cluster
              ha_cluster_hacluster_password: "{{ cluster_password }}"
              ha_cluster_manage_firewall: true
              ha_cluster_manage_selinux: true
              ha_cluster_cluster_properties:
                - attrs:
                    - name: placement-strategy
                      value: utilization
              ha_cluster_node_options:
                - node_name: node1
                  utilization:
                    - attrs:
                        - name: utilization1
                          value: 1
                        - name: utilization2
                          value: 2
                - node_name: node2
                  utilization:
                    - attrs:
                        - name: utilization1
                          value: 3
                        - name: utilization2
                          value: 4
              ha_cluster_resource_primitives:
                - id: resource1
                  agent: 'ocf:pacemaker:Dummy'
                  utilization:
                    - attrs:
                        - name: utilization1
                          value: 2
                        - name: utilization2
                          value: 3
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_cluster_properties: <cluster properties>
    List of sets of cluster properties for Pacemaker cluster-wide configuration. For utilization to have an effect, the placement-strategy property must be set and its value must be different from the value default.
    ha_cluster_node_options: <node options>
    A variable that defines various settings which vary from cluster node to cluster node.
    ha_cluster_resource_primitives: <cluster resources>

    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap
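
After the playbook finishes, you can optionally review the utilization values that the role configured. A minimal, read-only sketch, assuming the node names from the playbook:

    ---
    - name: Review utilization attributes (optional)
      hosts: node1
      tasks:
        - name: List node and resource utilization values
          ansible.builtin.command: "{{ item }}"
          loop:
            - pcs node utilization
            - pcs resource utilization
          register: utilization
          changed_when: false

        - name: Show the command output
          ansible.builtin.debug:
            msg: "{{ utilization.results | map(attribute='stdout_lines') | list }}"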

When a Pacemaker event occurs, such as a resource or a node failure or a configuration change, you may want to take some external action. For example, you may want to send an email message or log to a file or update a monitoring system.

You can configure your system to take an external action by using alert agents. These are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. The cluster passes information about the event to the agent through environment variables.

Note

The ha_cluster RHEL system role configures the cluster to call external programs to handle alerts. However, you must provide these programs and distribute them to cluster nodes.

For more detailed information about alert agents, see Triggering scripts for cluster events.
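
Because the role only configures the cluster to call the agent, the agent executable must already exist at the configured path on every cluster node. The following is a minimal sketch that distributes a trivial agent to the hypothetical path /alert1/path used in the example playbook below; Pacemaker passes event details to the agent through CRM_alert_* environment variables.

    ---
    - name: Distribute a minimal alert agent to the cluster nodes
      hosts: node1 node2
      tasks:
        - name: Ensure the parent directory exists
          ansible.builtin.file:
            path: /alert1
            state: directory
            mode: '0755'

        - name: Install the agent at the path referenced by ha_cluster_alerts
          ansible.builtin.copy:
            dest: /alert1/path
            mode: '0755'
            content: |
              #!/bin/sh
              # Log the event type and description provided by Pacemaker.
              logger -t pacemaker-alert "kind=${CRM_alert_kind} desc=${CRM_alert_desc}"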

This example procedure uses the ha_cluster RHEL system role to create a high availability cluster in an automated fashion that configures a Pacemaker alert.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
      Copy to Clipboard Toggle word wrap
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      Copy to Clipboard Toggle word wrap
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure a cluster with alerts
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
              ha_cluster_cluster_name: my-new-cluster
              ha_cluster_hacluster_password: "{{ cluster_password }}"
              ha_cluster_manage_firewall: true
              ha_cluster_manage_selinux: true
              ha_cluster_alerts:
                - id: alert1
                  path: /alert1/path
                  description: Alert1 description
                  instance_attrs:
                    - attrs:
                        - name: alert_attr1_name
                          value: alert_attr1_value
                  meta_attrs:
                    - attrs:
                        - name: alert_meta_attr1_name
                          value: alert_meta_attr1_value
                  recipients:
                    - value: recipient_value
                      id: recipient1
                      description: Recipient1 description
                      instance_attrs:
                        - attrs:
                            - name: recipient_attr1_name
                              value: recipient_attr1_value
                      meta_attrs:
                        - attrs:
                            - name: recipient_meta_attr1_name
                              value: recipient_meta_attr1_value
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_alerts: <alert definitions>

    A variable that defines Pacemaker alerts.

    • id - ID of an alert.
    • path - Path to the alert agent executable.
    • description - Description of the alert.
    • instance_attrs - List of sets of the alert’s instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
    • meta_attrs - List of sets of the alert’s meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
    • recipients - List of the alert’s recipients.
      • value - Value of a recipient.
      • id - ID of the recipient.
      • description - Description of the recipient.
      • instance_attrs - List of sets of the recipient’s instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
      • meta_attrs - List of sets of the recipient’s meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap

11.15. Configuring a high availability cluster with a quorum device

Your cluster can sustain more node failures than standard quorum rules permit when you configure a separate quorum device. The quorum device acts as a lightweight arbitration device for the cluster. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation.

For information about quorum devices, see Configuring quorum devices.

To configure a high availability cluster with a separate quorum device by using the ha_cluster RHEL system role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters.

11.15.1. Configuring a quorum device

To configure a quorum device using the ha_cluster RHEL system role, follow the steps in this example procedure. Note that you cannot run a quorum device on a cluster node.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
      Copy to Clipboard Toggle word wrap
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      Copy to Clipboard Toggle word wrap
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook-qdevice.yml, with the following content:

    ---
    - name: Configure a host with a quorum device
      hosts: nodeQ
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create a quorum device for the cluster
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_present: false
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_qnetd:
              present: true
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_present: false
    A variable that, if set to false, determines that all cluster configuration will be removed from the target host.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_qnetd: <quorum_device_options>

    A variable that configures a qnetd host.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml
    Copy to Clipboard Toggle word wrap

11.15.2. Configuring a cluster to use a quorum device

To configure a cluster to use a quorum device, follow the steps in this example procedure.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Create a playbook file, for example, ~/playbook-cluster-qdevice.yml, with the following content:

    ---
    - name: Configure a cluster to use a quorum device
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create cluster that uses a quorum device
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_quorum:
              device:
                model: net
                model_options:
                  - name: host
                    value: nodeQ
                  - name: algorithm
                    value: lms
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_quorum: <quorum_parameters>

    A variable that configures cluster quorum which you can use to specify that the cluster uses a quorum device.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml
    Copy to Clipboard Toggle word wrap
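
After the playbook finishes, you can optionally verify that the cluster registered the quorum device. A minimal, read-only sketch run against one cluster node:

    ---
    - name: Check the quorum device from the cluster (optional)
      hosts: node1
      tasks:
        - name: Display quorum device status
          ansible.builtin.command: pcs quorum device status
          register: qdevice_status
          changed_when: false

        - name: Show the command output
          ansible.builtin.debug:
            var: qdevice_status.stdout_lines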

You can use Pacemaker rules to make your configuration more dynamic. For example, you can use a node attribute to assign machines to different processing groups based on time and then use that attribute when creating location constraints.

Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. For information on node attributes, see Determining resource location with rules.

The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures node attributes.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
      Copy to Clipboard Toggle word wrap
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      Copy to Clipboard Toggle word wrap
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: node1 node2
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Create a cluster that defines node attributes
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_cluster_name: my-new-cluster
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_node_options:
              - node_name: node1
                attributes:
                  - attrs:
                      - name: attribute1
                        value: value1A
                      - name: attribute2
                        value: value2A
              - node_name: node2
                attributes:
                  - attrs:
                      - name: attribute1
                        value: value1B
                      - name: attribute2
                        value: value2B
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_node_options: <node_settings>
    A variable that defines various settings that vary from one cluster node to another.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap
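
After the playbook finishes, you can optionally list the attributes that were set for each node, for example to confirm them before referencing them in location constraint rules. A minimal, read-only sketch:

    ---
    - name: List configured node attributes (optional)
      hosts: node1
      tasks:
        - name: Display node attributes
          ansible.builtin.command: pcs node attribute
          register: node_attributes
          changed_when: false

        - name: Show the command output
          ansible.builtin.debug:
            var: node_attributes.stdout_lines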

High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat Knowledgebase article Red Hat High Availability Add-On Documentation Guide.

The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.

This example uses an APC power switch with a host name of zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
      Copy to Clipboard Toggle word wrap
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
      Copy to Clipboard Toggle word wrap
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: z1.example.com z2.example.com
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure active/passive Apache server in a high availability cluster
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_cluster_name: my_cluster
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_fence_agent_packages:
              - fence-agents-apc-snmp
            ha_cluster_resource_primitives:
              - id: myapc
                agent: stonith:fence_apc_snmp
                instance_attrs:
                  - attrs:
                      - name: ipaddr
                        value: zapc.example.com
                      - name: pcmk_host_map
                        value: z1.example.com:1;z2.example.com:2
                      - name: login
                        value: apc
                      - name: passwd
                        value: apc
              - id: my_lvm
                agent: ocf:heartbeat:LVM-activate
                instance_attrs:
                  - attrs:
                      - name: vgname
                        value: my_vg
                      - name: vg_access_mode
                        value: system_id
              - id: my_fs
                agent: Filesystem
                instance_attrs:
                  - attrs:
                      - name: device
                        value: /dev/my_vg/my_lv
                      - name: directory
                        value: /var/www
                      - name: fstype
                        value: xfs
              - id: VirtualIP
                agent: IPaddr2
                instance_attrs:
                  - attrs:
                      - name: ip
                        value: 198.51.100.3
                      - name: cidr_netmask
                        value: 24
              - id: Website
                agent: apache
                instance_attrs:
                  - attrs:
                      - name: configfile
                        value: /etc/httpd/conf/httpd.conf
                      - name: statusurl
                        value: http://127.0.0.1/server-status
            ha_cluster_resource_groups:
              - id: apachegroup
                resource_ids:
                  - my_lvm
                  - my_fs
                  - VirtualIP
                  - Website
    Copy to Clipboard Toggle word wrap

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
    The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
    The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
    A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
    A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_fence_agent_packages: <fence_agent_packages>
    A list of fence agent packages to install.
    ha_cluster_resource_primitives: <cluster_resources>
    A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
    ha_cluster_resource_groups: <resource_groups>
    A list of resource group definitions configured by the ha_cluster RHEL system role.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml
    Copy to Clipboard Toggle word wrap
  5. When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.

    Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster.

    /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    Copy to Clipboard Toggle word wrap

    Replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path where website is the name of the Apache resource. In this example, the Apache resource name is Website.

    /usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
    /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
    /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
    Copy to Clipboard Toggle word wrap

Verification

  1. From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node, z1.example.com.

    If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration.

    [root@z1 ~]# pcs status
    Cluster name: my_cluster
    Last updated: Wed Jul 31 16:38:51 2013
    Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: 1.1.10-5.el7-9abe687
    2 Nodes configured
    6 Resources configured
    
    Online: [ z1.example.com z2.example.com ]
    
    Full list of resources:
     myapc  (stonith:fence_apc_snmp):       Started z1.example.com
     Resource Group: apachegroup
         my_lvm     (ocf::heartbeat:LVM-activate):   Started z1.example.com
         my_fs      (ocf::heartbeat:Filesystem):    Started z1.example.com
         VirtualIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
         Website    (ocf::heartbeat:apache):        Started z1.example.com
    Copy to Clipboard Toggle word wrap
  2. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".

    Hello
    Copy to Clipboard Toggle word wrap
  3. To test whether the resource group running on z1.example.com fails over to node z2.example.com, put node z1.example.com in standby mode, after which the node will no longer be able to host resources.

    [root@z1 ~]# pcs node standby z1.example.com
    Copy to Clipboard Toggle word wrap
  4. After putting node z1 in standby mode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running on z2.

    [root@z1 ~]# pcs status
    Cluster name: my_cluster
    Last updated: Wed Jul 31 17:16:17 2013
    Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: 1.1.10-5.el7-9abe687
    2 Nodes configured
    6 Resources configured
    
    Node z1.example.com (1): standby
    Online: [ z2.example.com ]
    
    Full list of resources:
    
     myapc  (stonith:fence_apc_snmp):       Started z1.example.com
     Resource Group: apachegroup
         my_lvm     (ocf::heartbeat:LVM-activate):  Started z2.example.com
         my_fs      (ocf::heartbeat:Filesystem):    Started z2.example.com
         VirtualIP  (ocf::heartbeat:IPaddr2):       Started z2.example.com
         Website    (ocf::heartbeat:apache):        Started z2.example.com
    Copy to Clipboard Toggle word wrap

    The web site at the defined IP address should still display, without interruption.

  5. To remove z1 from standby mode, enter the following command.

    [root@z1 ~]# pcs node unstandby z1.example.com
    Copy to Clipboard Toggle word wrap
    Note

    Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.
