11.17. Configuring an Apache HTTP server in a high availability cluster with the ha_cluster RHEL system role


You can use the ha_cluster RHEL system role to configure an Apache HTTP server in a high availability cluster.

High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat Knowledgebase article Red Hat High Availability Add-On Documentation Guide.

The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.

This example uses an APC power switch with a host name of zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example.

Warning

The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create ~/vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      cluster_password: <cluster_password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
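    If you need to change the stored password later, you can reopen the encrypted file with the ansible-vault edit command, which prompts for the vault password before opening the editor:

      $ ansible-vault edit ~/vault.yml
      Vault password: <vault_password>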
  2. Create a playbook file, for example, ~/playbook.yml, with the following content:

    ---
    - name: Create a high availability cluster
      hosts: z1.example.com z2.example.com
      vars_files:
        - ~/vault.yml
      tasks:
        - name: Configure active/passive Apache server in a high availability cluster
          ansible.builtin.include_role:
            name: redhat.rhel_system_roles.ha_cluster
          vars:
            ha_cluster_hacluster_password: "{{ cluster_password }}"
            ha_cluster_cluster_name: my_cluster
            ha_cluster_manage_firewall: true
            ha_cluster_manage_selinux: true
            ha_cluster_fence_agent_packages:
              - fence-agents-apc-snmp
            ha_cluster_resource_primitives:
              - id: myapc
                agent: stonith:fence_apc_snmp
                instance_attrs:
                  - attrs:
                      - name: ipaddr
                        value: zapc.example.com
                      - name: pcmk_host_map
                        value: z1.example.com:1;z2.example.com:2
                      - name: login
                        value: apc
                      - name: passwd
                        value: apc
              - id: my_lvm
                agent: ocf:heartbeat:LVM-activate
                instance_attrs:
                  - attrs:
                      - name: vgname
                        value: my_vg
                      - name: vg_access_mode
                        value: system_id
              - id: my_fs
                agent: Filesystem
                instance_attrs:
                  - attrs:
                      - name: device
                        value: /dev/my_vg/my_lv
                      - name: directory
                        value: /var/www
                      - name: fstype
                        value: xfs
              - id: VirtualIP
                agent: IPaddr2
                instance_attrs:
                  - attrs:
                      - name: ip
                        value: 198.51.100.3
                      - name: cidr_netmask
                        value: 24
              - id: Website
                agent: apache
                instance_attrs:
                  - attrs:
                      - name: configfile
                        value: /etc/httpd/conf/httpd.conf
                      - name: statusurl
                        value: http://127.0.0.1/server-status
            ha_cluster_resource_groups:
              - id: apachegroup
                resource_ids:
                  - my_lvm
                  - my_fs
                  - VirtualIP
                  - Website

    The settings specified in the example playbook include the following:

    ha_cluster_cluster_name: <cluster_name>
      The name of the cluster you are creating.
    ha_cluster_hacluster_password: <password>
      The password of the hacluster user. The hacluster user has full access to a cluster.
    ha_cluster_manage_firewall: true
      A variable that determines whether the ha_cluster RHEL system role manages the firewall.
    ha_cluster_manage_selinux: true
      A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role.
    ha_cluster_fence_agent_packages: <fence_agent_packages>
      A list of fence agent packages to install.
    ha_cluster_resource_primitives: <cluster_resources>
      A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
    ha_cluster_resource_groups: <resource_groups>
      A list of resource group definitions configured by the ha_cluster RHEL system role.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml
  5. When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.

    Remove the following line from the /etc/logrotate.d/httpd file on each node in the cluster.

    /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true

    Replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path, where website is the name of the Apache resource. In this example, the Apache resource name is Website.

    /usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
    /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
    /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
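
    For orientation, after the edit the postrotate section of the /etc/logrotate.d/httpd file might look similar to the following sketch. The surrounding directives are only an illustration of a typical default file and can differ on your systems; the three lines inside the postrotate block are the required change.

    /var/log/httpd/*log {
        missingok
        notifempty
        sharedscripts
        delaycompress
        postrotate
            /usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
            /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
            /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
        endscript
    }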

Verification

  1. From one of the nodes in the cluster, check the status of the cluster. Note that all four resources of the apachegroup resource group are running on the same node, z1.example.com.

    If you find that the resources you configured are not running, you can run the pcs resource debug-start <resource> command to test the resource configuration, as shown in the example after the following status output.

    [root@z1 ~]# pcs status
    Cluster name: my_cluster
    Last updated: Wed Jul 31 16:38:51 2013
    Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: 1.1.10-5.el7-9abe687
    2 Nodes configured
    6 Resources configured
    
    Online: [ z1.example.com z2.example.com ]
    
    Full list of resources:
     myapc  (stonith:fence_apc_snmp):       Started z1.example.com
     Resource Group: apachegroup
         my_lvm     (ocf::heartbeat:LVM-activate):   Started z1.example.com
         my_fs      (ocf::heartbeat:Filesystem):    Started z1.example.com
         VirtualIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
         Website    (ocf::heartbeat:apache):        Started z1.example.com
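
    If a resource fails to start, running its agent in debug mode prints the agent output directly to the terminal. For example, to test the Website resource defined in this playbook:

    [root@z1 ~]# pcs resource debug-start Website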
  2. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".

    Hello
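
    You can also check from the command line, for example with curl against the floating IP address defined for the VirtualIP resource. This assumes that the sample index page has already been created in the Apache document root on the shared file system:

    $ curl http://198.51.100.3
    Hello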
  3. To test whether the resource group running on z1.example.com fails over to node z2.example.com, put node z1.example.com in standby mode, after which the node will no longer be able to host resources.

    [root@z1 ~]# pcs node standby z1.example.com
  4. After putting node z1 in standby mode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running on z2.

    [root@z1 ~]# pcs status
    Cluster name: my_cluster
    Last updated: Wed Jul 31 17:16:17 2013
    Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: 1.1.10-5.el7-9abe687
    2 Nodes configured
    6 Resources configured
    
    Node z1.example.com (1): standby
    Online: [ z2.example.com ]
    
    Full list of resources:
    
     myapc  (stonith:fence_apc_snmp):       Started z1.example.com
     Resource Group: apachegroup
         my_lvm     (ocf::heartbeat:LVM-activate):  Started z2.example.com
         my_fs      (ocf::heartbeat:Filesystem):    Started z2.example.com
         VirtualIP  (ocf::heartbeat:IPaddr2):       Started z2.example.com
         Website    (ocf::heartbeat:apache):        Started z2.example.com

    The website at the defined IP address should still display, without interruption.

  5. To remove z1 from standby mode, enter the following command.

    [root@z1 ~]# pcs node unstandby z1.example.com
    Note

    Removing a node from standby mode does not in itself cause the resources to fail back over to that node. Whether they fail back depends on the resource-stickiness value of the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.
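
    For example, you can set a default resource-stickiness value for all resources with the pcs resource defaults command. The value 100 below is only an illustration; on older pcs releases, the update keyword is omitted.

    [root@z1 ~]# pcs resource defaults update resource-stickiness=100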
