Chapter 5. Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster

Configure an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster with the following procedure. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.

The following illustration shows a high-level overview of the cluster: a two-node Red Hat High Availability cluster configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache HTTP server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept. In this illustration, the web server is running on Node 1 while Node 2 is available to run the server if Node 1 becomes inoperative.

Figure 5.1. Apache in a Red Hat High Availability Two-Node Cluster

This use case requires that your system include the following components:

  • A two-node Red Hat High Availability cluster with power fencing configured for each node. We recommend but do not require a private network. This procedure uses the cluster example provided in Creating a Red Hat High-Availability cluster with Pacemaker.
  • A public virtual IP address, required for Apache.
  • Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device.

The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, first perform the following procedures:

  1. Configure an XFS file system on the logical volume my_lv.
  2. Configure a web server.

After performing these steps, you create the resource group and the resources it contains.

5.1. Configuring an LVM volume with an XFS file system in a Pacemaker cluster

Create an LVM logical volume on storage that is shared between the nodes of the cluster with the following procedure.

Note

LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.

The following procedure creates an LVM logical volume and then creates an XFS file system on that volume for use in a Pacemaker cluster. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created.

Procedure

  1. On both nodes of the cluster, perform the following steps to set the LVM system ID to the value of the uname identifier for the system. The LVM system ID is used to ensure that only the cluster is capable of activating the volume group.

    1. Set the system_id_source configuration option in the /etc/lvm/lvm.conf configuration file to uname.

      # Configuration option global/system_id_source.
      system_id_source = "uname"
    2. Verify that the LVM system ID on the node matches the uname for the node.

      # lvm systemid
        system ID: z1.example.com
      # uname -n
        z1.example.com
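
      If you run the same check on the other cluster node (host name z2.example.com, as assumed throughout this example), the reported system ID should match that node's uname instead:

      # lvm systemid
        system ID: z2.example.com
      # uname -n
        z2.example.com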
  2. Create the LVM volume and create an XFS file system on that volume. Because the /dev/sdb1 partition is shared storage, perform this part of the procedure on one node only.

    Note

    If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker.

    1. Create an LVM physical volume on partition /dev/sdb1.

      [root@z1 ~]# pvcreate /dev/sdb1
        Physical volume "/dev/sdb1" successfully created

    2. Create the volume group my_vg that consists of the physical volume /dev/sdb1.

      Specify the --setautoactivation n flag to ensure that volume groups managed by Pacemaker in a cluster will not be automatically activated on startup. If you are using an existing volume group for the LVM volume you are creating, you can reset this flag with the vgchange --setautoactivation n command for the volume group.

      [root@z1 ~]# vgcreate --setautoactivation n my_vg /dev/sdb1
        Volume group "my_vg" successfully created
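
      If you were reusing an existing volume group rather than creating a new one, you could clear the flag with the command mentioned above (a sketch that reuses this example's volume group name):

      [root@z1 ~]# vgchange --setautoactivation n my_vg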
    3. Verify that the new volume group has the system ID of the node on which you are running the command, which is the node from which you created the volume group.

      [root@z1 ~]# vgs -o+systemid
        VG    #PV #LV #SN Attr   VSize  VFree  System ID
        my_vg   1   0   0 wz--n- <1.82t <1.82t z1.example.com
    4. Create a logical volume using the volume group my_vg.

      [root@z1 ~]# lvcreate -L450 -n my_lv my_vg
        Rounding up size to full physical extent 452.00 MiB
        Logical volume "my_lv" created

      You can use the lvs command to display the logical volume.

      [root@z1 ~]# lvs
        LV      VG      Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert
        my_lv   my_vg   -wi-a---- 452.00m
        ...
    5. Create an XFS file system on the logical volume my_lv.

      [root@z1 ~]# mkfs.xfs /dev/my_vg/my_lv
      meta-data=/dev/my_vg/my_lv       isize=512    agcount=4, agsize=28928 blks
               =                       sectsz=512   attr=2, projid32bit=1
      ...
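
      As an optional check that is not part of the original procedure, you can confirm that the file system signature was written; blkid should report TYPE="xfs" for the logical volume (output abbreviated):

      [root@z1 ~]# blkid /dev/my_vg/my_lv
      /dev/my_vg/my_lv: UUID="..." TYPE="xfs"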
  3. If the use of a devices file is enabled with the use_devicesfile = 1 parameter in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. This feature is enabled by default.

    [root@z2 ~]# lvmdevices --adddev /dev/sdb1
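
    To confirm that the device was added, you can list the devices file entries on that node; /dev/sdb1 should appear in the output (a quick check that is not part of the original procedure):

    [root@z2 ~]# lvmdevices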

5.2. Configuring an Apache HTTP Server

Configure an Apache HTTP Server with the following procedure.

Procedure

  1. Ensure that the Apache HTTP Server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of the Apache HTTP Server.

    On each node, execute the following command.

    # dnf install -y httpd wget

    If you are running the firewalld daemon, enable the ports that are required by the Red Hat High Availability Add-On on each node in the cluster, and enable the ports that you will require for running httpd. This example enables the httpd ports for public access, but the specific ports to enable for httpd may vary for production use.

    # firewall-cmd --permanent --add-service=http
    # firewall-cmd --permanent --zone=public --add-service=http
    # firewall-cmd --reload
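
    To confirm that the service was enabled, you can list the services in the zone; http should appear in the output (a quick check that is not part of the original procedure):

    # firewall-cmd --zone=public --list-services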
  2. In order for the Apache resource agent to get the status of Apache, on each node in the cluster create the following addition to the existing configuration to enable the status server URL.

    # cat <<-END > /etc/httpd/conf.d/status.conf
    <Location /server-status>
        SetHandler server-status
        Require local
    </Location>
    END
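
    Optionally, you can verify that the added configuration parses cleanly. The httpd -t command checks the configuration syntax only and does not start the server:

    # httpd -t
    Syntax OK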
  3. Create a web page for Apache to serve up.

    On one node in the cluster, ensure that the logical volume you created in Configuring an LVM volume with an XFS file system is activated, mount the file system that you created on that logical volume, create the file index.html on that file system, and then unmount the file system.

    # lvchange -ay my_vg/my_lv
    # mount /dev/my_vg/my_lv /var/www/
    # mkdir /var/www/html
    # mkdir /var/www/cgi-bin
    # mkdir /var/www/error
    # restorecon -R /var/www
    # cat <<-END >/var/www/html/index.html
    <html>
    <body>Hello</body>
    </html>
    END
    # umount /var/www
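
    If you want to read the page back before handing control to the cluster, you can briefly remount the file system and inspect the file (an optional check that reuses the commands above):

    # mount /dev/my_vg/my_lv /var/www/
    # cat /var/www/html/index.html
    # umount /var/www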

5.3. Creating the resources and resource groups

Create the resources for your cluster with the following procedure. To ensure these resources all run on the same node, they are configured as part of the resource group apachegroup. The resources to create are as follows, listed in the order in which they will start.

  1. An LVM-activate resource named my_lvm that uses the LVM volume group you created in Configuring an LVM volume with an XFS file system.
  2. A Filesystem resource named my_fs, that uses the file system device /dev/my_vg/my_lv you created in Configuring an LVM volume with an XFS file system.
  3. An IPaddr2 resource, which is a floating IP address for the apachegroup resource group. The IP address must not be one already associated with a physical node. If the IPaddr2 resource’s NIC device is not specified, the floating IP must reside on the same network as one of the node’s statically assigned IP addresses; otherwise, the NIC device to which the floating IP address should be assigned cannot be properly detected.
  4. An apache resource named Website that uses the index.html file and the Apache configuration you defined in Configuring an Apache HTTP server.

The following procedure creates the resource group apachegroup and the resources that the group contains. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only.

Procedure

  1. The following command creates the LVM-activate resource my_lvm. Because the resource group apachegroup does not yet exist, this command creates the resource group.

    Note

    Do not configure more than one LVM-activate resource that uses the same LVM volume group in an active/passive HA configuration, as this could cause data corruption. Additionally, do not configure an LVM-activate resource as a clone resource in an active/passive HA configuration.

    [root@z1 ~]# pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup

    When you create a resource, the resource is started automatically. You can use the following command to confirm that the resource was created and has started.

    # pcs resource status
     Resource Group: apachegroup
         my_lvm	(ocf::heartbeat:LVM-activate):	Started

    You can manually stop and start an individual resource with the pcs resource disable and pcs resource enable commands.
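
    For example, the following commands stop and then restart the my_lvm resource created above:

    [root@z1 ~]# pcs resource disable my_lvm
    [root@z1 ~]# pcs resource enable my_lvm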

  2. The following commands create the remaining resources for the configuration, adding them to the existing resource group apachegroup.

    [root@z1 ~]# pcs resource create my_fs Filesystem device="/dev/my_vg/my_lv" directory="/var/www" fstype="xfs" --group apachegroup
    
    [root@z1 ~]# pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup
    
    [root@z1 ~]# pcs resource create Website apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group apachegroup
  3. After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all four resources are running on the same node.

    [root@z1 ~]# pcs status
    Cluster name: my_cluster
    Last updated: Wed Jul 31 16:38:51 2013
    Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: 1.1.10-5.el7-9abe687
    2 Nodes configured
    6 Resources configured
    
    Online: [ z1.example.com z2.example.com ]
    
    Full list of resources:
     myapc	(stonith:fence_apc_snmp):	Started z1.example.com
     Resource Group: apachegroup
         my_lvm	(ocf::heartbeat:LVM-activate):	Started z1.example.com
         my_fs	(ocf::heartbeat:Filesystem):	Started z1.example.com
         VirtualIP	(ocf::heartbeat:IPaddr2):	Started z1.example.com
         Website	(ocf::heartbeat:apache):	Started z1.example.com

    Note that if you have not configured a fencing device for your cluster, by default the resources do not start.
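
    You can check the state of the fencing device separately with the following command; the fence device name myapc comes from the earlier cluster-creation example, and the output format mirrors the status display above:

    [root@z1 ~]# pcs stonith status
     myapc	(stonith:fence_apc_snmp):	Started z1.example.com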

  4. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".

    Hello

    If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration.
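
    For example, to test the Website resource outside of cluster control:

    [root@z1 ~]# pcs resource debug-start Website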

  5. When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.

    Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster.

    /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true

    Replace the line you removed with the following three lines, specifying /var/run/httpd-website.pid as the PID file path, where website is the name of the Apache resource. In this example, the Apache resource name is Website.

    /usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null &&
    /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null &&
    /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
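
    After editing the file, you can optionally run logrotate in debug mode to confirm that the changed configuration still parses; this is a dry run that does not rotate any logs and is not part of the original procedure:

    # logrotate -d /etc/logrotate.d/httpd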

5.4. Testing the resource configuration

Test the resource configuration in a cluster with the following procedure.

In the cluster status display shown in Creating the resources and resource groups, all of the resources are running on node z1.example.com. You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources.

Procedure

  1. The following command puts node z1.example.com in standby mode.

    [root@z1 ~]# pcs node standby z1.example.com
  2. After putting node z1 in standby mode, check the cluster status. Note that the resources should now all be running on z2.

    [root@z1 ~]# pcs status
    Cluster name: my_cluster
    Last updated: Wed Jul 31 17:16:17 2013
    Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com
    Stack: corosync
    Current DC: z2.example.com (2) - partition with quorum
    Version: 1.1.10-5.el7-9abe687
    2 Nodes configured
    6 Resources configured
    
    Node z1.example.com (1): standby
    Online: [ z2.example.com ]
    
    Full list of resources:
    
     myapc	(stonith:fence_apc_snmp):	Started z1.example.com
     Resource Group: apachegroup
         my_lvm	(ocf::heartbeat:LVM-activate):	Started z2.example.com
         my_fs	(ocf::heartbeat:Filesystem):	Started z2.example.com
         VirtualIP	(ocf::heartbeat:IPaddr2):	Started z2.example.com
         Website	(ocf::heartbeat:apache):	Started z2.example.com

    The web site at the defined IP address should still display, without interruption.

  3. To remove z1 from standby mode, enter the following command.

    [root@z1 ~]# pcs node unstandby z1.example.com
    Note

    Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.
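
    For example, to give the resources in this example a preference for remaining on their current node, you could set a stickiness value on the resource group (a sketch only; the value 100 is an arbitrary example, and the appropriate value depends on your failback policy):

    [root@z1 ~]# pcs resource meta apachegroup resource-stickiness=100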
