Chapter 47. Getting started with Pacemaker
To familiarize yourself with the tools and processes you use to create a Pacemaker cluster, you can run the following procedures. They are intended for users who are interested in seeing what the cluster software looks like and how it is administered, without needing to configure a working cluster.
These procedures do not create a supported Red Hat cluster, which requires at least two nodes and the configuration of a fencing device. For full information on Red Hat’s support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters.
47.1. Learning to use Pacemaker
By working through this procedure, you will learn how to use Pacemaker to set up a cluster, how to display cluster status, and how to configure a cluster service. This example creates an Apache HTTP server as a cluster resource and shows how the cluster responds when the resource fails.
In this example:
- The node is z1.example.com.
- The floating IP address is 192.168.122.120.
Prerequisites
- A single node running RHEL 8
- A floating IP address that resides on the same network as one of the node’s statically assigned IP addresses
- The name of the node on which you are running is in your /etc/hosts file (an example entry follows this list).
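For example, an /etc/hosts entry for this procedure might look like the following. The static address shown here is only an illustration; use your node's actual statically assigned address.

192.168.122.100   z1.example.com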
Procedure
Install the Red Hat High Availability Add-On software packages from the High Availability channel, and start and enable the pcsd service.

# yum install pcs pacemaker fence-agents-all
...
# systemctl start pcsd.service
# systemctl enable pcsd.service
If you are running the firewalld daemon, enable the ports that are required by the Red Hat High Availability Add-On.

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Set a password for user hacluster on each node in the cluster and authenticate user hacluster for each node in the cluster on the node from which you will be running the pcs commands. This example uses only a single node, the node from which you are running the commands, but this step is included here because it is a necessary step in configuring a supported Red Hat High Availability multi-node cluster.

# passwd hacluster
...
# pcs host auth z1.example.com
Create a cluster named my_cluster with one member and check the status of the cluster. This command creates and starts the cluster in one step.

# pcs cluster setup my_cluster --start z1.example.com
...
# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
 Last updated: Thu Oct 11 16:11:18 2018
 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com
 1 node configured
 0 resources configured

PCSD Status:
  z1.example.com: Online
A Red Hat High Availability cluster requires that you configure fencing for the cluster. The reasons for this requirement are described in Fencing in a Red Hat High Availability Cluster. For this introduction, however, which is intended to show only how to use the basic Pacemaker commands, disable fencing by setting the stonith-enabled cluster option to false.

Warning: The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely fenced.

# pcs property set stonith-enabled=false
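Optionally, you can confirm that the property has been set by displaying it with pcs. This verification step is not part of the original procedure, and the exact output format may vary with your pcs version.

# pcs property show stonith-enabled
Cluster Properties:
 stonith-enabled: false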
Configure an Apache HTTP Server on your system and create a web page to display a simple text message. If you are running the firewalld daemon, enable the ports that are required by httpd.

Note: Do not use systemctl enable to enable any services that will be managed by the cluster to start at system boot.

# yum install -y httpd wget
...
# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload

# cat <<-END >/var/www/html/index.html
<html>
<body>My Test Site - $(hostname)</body>
</html>
END
In order for the Apache resource agent to get the status of Apache, create the following addition to the existing configuration to enable the status server URL.
# cat <<-END > /etc/httpd/conf.d/status.conf
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
    Allow from ::1
</Location>
END
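If you want to verify that the added configuration is syntactically valid before the cluster manages Apache, you can optionally run Apache's built-in configuration check. This step is not part of the original procedure.

# apachectl configtest
Syntax OK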
Create IPaddr2 and apache resources for the cluster to manage. The IPaddr2 resource is a floating IP address that must not be one already associated with a physical node. If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP address used by the node.

You can display a list of all available resource types with the pcs resource list command. You can use the pcs resource describe resourcetype command to display the parameters you can set for the specified resource type. For example, the following command displays the parameters you can set for a resource of type apache:

# pcs resource describe apache
...
In this example, the IP address resource and the apache resource are both configured as part of a group named apachegroup, which ensures that the resources are kept together to run on the same node when you are configuring a working multi-node cluster.

# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup

# pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" --group apachegroup

# pcs status
Cluster name: my_cluster
Stack: corosync
Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
Last updated: Fri Oct 12 09:54:33 2018
Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

1 node configured
2 resources configured

Online: [ z1.example.com ]

Full list of resources:

 Resource Group: apachegroup
     ClusterIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
     WebSite    (ocf::heartbeat:apache):        Started z1.example.com

PCSD Status:
  z1.example.com: Online
...
After you have configured a cluster resource, you can use the pcs resource config command to display the options that are configured for that resource.

# pcs resource config WebSite
Resource: WebSite (class=ocf provider=heartbeat type=apache)
 Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
 Operations: start interval=0s timeout=40s (WebSite-start-interval-0s)
             stop interval=0s timeout=60s (WebSite-stop-interval-0s)
             monitor interval=1min (WebSite-monitor-interval-1min)
- Point your browser to the website you created using the floating IP address you configured. This should display the text message you defined.
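If you prefer to check from the command line, you can fetch the page with curl using the floating IP address. This is an optional verification; the output shown assumes the example index.html created above, and the hostname in the response will match your node's configured hostname.

# curl http://192.168.122.120
<html>
<body>My Test Site - z1.example.com</body>
</html>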
Stop the apache web service and check the cluster status. Using killall -9 simulates an application-level crash.

# killall -9 httpd
Check the cluster status. You should see that stopping the web service caused a failed action, but that the cluster software restarted the service and you should still be able to access the website.
# pcs status
Cluster name: my_cluster
...
Current DC: z1.example.com (version 1.1.13-10.el7-44eb2dd) - partition with quorum
1 node and 2 resources configured

Online: [ z1.example.com ]

Full list of resources:

 Resource Group: apachegroup
     ClusterIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
     WebSite    (ocf::heartbeat:apache):        Started z1.example.com

Failed Resource Actions:
* WebSite_monitor_60000 on z1.example.com 'not running' (7): call=13, status=complete, exitreason='none',
    last-rc-change='Thu Oct 11 23:45:50 2016', queued=0ms, exec=0ms

PCSD Status:
    z1.example.com: Online
Once the service is up and running again, you can clear the failure status on the resource that failed so that the failed action notice no longer appears when you view the cluster status.
# pcs resource cleanup WebSite
When you are finished looking at the cluster and the cluster status, stop the cluster services on the node. Even though you have only started services on one node for this introduction, the --all parameter is included since it would stop cluster services on all nodes on an actual multi-node cluster.

# pcs cluster stop --all
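If you want to experiment with the cluster again later, you can start the cluster services back up with the corresponding start command. This follow-up step is not part of the original procedure.

# pcs cluster start --all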
47.2. Learning to configure failover
The following procedure provides an introduction to creating a Pacemaker cluster running a service that will fail over from one node to another when the node on which the service is running becomes unavailable. By working through this procedure, you can learn how to create a service in a two-node cluster and you can then observe what happens to that service when it fails on the node on which it is running.
This example procedure configures a two-node Pacemaker cluster running an Apache HTTP server. You can then stop the Apache service on one node to see how the service remains available.
In this example:
- The nodes are z1.example.com and z2.example.com.
- The floating IP address is 192.168.122.120.
Prerequisites
- Two nodes running RHEL 8 that can communicate with each other
- A floating IP address that resides on the same network as one of the nodes' statically assigned IP addresses
- The name of the node on which you are running is in your /etc/hosts file (example entries follow this list).
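For example, /etc/hosts entries for this procedure might look like the following. The static addresses shown here are only illustrations; use your nodes' actual statically assigned addresses.

192.168.122.100   z1.example.com
192.168.122.101   z2.example.com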
Procedure
On both nodes, install the Red Hat High Availability Add-On software packages from the High Availability channel, and start and enable the pcsd service.

# yum install pcs pacemaker fence-agents-all
...
# systemctl start pcsd.service
# systemctl enable pcsd.service
If you are running the firewalld daemon, on both nodes enable the ports that are required by the Red Hat High Availability Add-On.

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
On both nodes in the cluster, set a password for user hacluster.

# passwd hacluster
Authenticate user hacluster for each node in the cluster on the node from which you will be running the pcs commands.

# pcs host auth z1.example.com z2.example.com
Create a cluster named my_cluster with both nodes as cluster members. This command creates and starts the cluster in one step. You only need to run this from one node in the cluster because pcs configuration commands take effect for the entire cluster.

On one node in the cluster, run the following command.
# pcs cluster setup my_cluster --start z1.example.com z2.example.com
A Red Hat High Availability cluster requires that you configure fencing for the cluster. The reasons for this requirement are described in Fencing in a Red Hat High Availability Cluster. For this introduction, however, to show only how failover works in this configuration, disable fencing by setting the stonith-enabled cluster option to false.

Warning: The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely fenced.

# pcs property set stonith-enabled=false
After creating a cluster and disabling fencing, check the status of the cluster.
Note: When you run the pcs cluster status command, it may show output that temporarily differs slightly from the examples as the system components start up.

# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
 Last updated: Thu Oct 11 16:11:18 2018
 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com
 2 nodes configured
 0 resources configured

PCSD Status:
  z1.example.com: Online
  z2.example.com: Online
On both nodes, configure an Apache HTTP Server and create a web page to display a simple text message. If you are running the firewalld daemon, enable the ports that are required by httpd.

Note: Do not use systemctl enable to enable any services that will be managed by the cluster to start at system boot.

# yum install -y httpd wget
...
# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload

# cat <<-END >/var/www/html/index.html
<html>
<body>My Test Site - $(hostname)</body>
</html>
END
In order for the Apache resource agent to get the status of Apache, on each node in the cluster create the following addition to the existing configuration to enable the status server URL.
# cat <<-END > /etc/httpd/conf.d/status.conf
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
    Allow from ::1
</Location>
END
Create IPaddr2 and apache resources for the cluster to manage. The IPaddr2 resource is a floating IP address that must not be one already associated with a physical node. If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP address used by the node.

You can display a list of all available resource types with the pcs resource list command. You can use the pcs resource describe resourcetype command to display the parameters you can set for the specified resource type. For example, the following command displays the parameters you can set for a resource of type apache:

# pcs resource describe apache
...
In this example, the IP address resource and the apache resource are both configured as part of a group named apachegroup, which ensures that the resources are kept together to run on the same node.

Run the following commands from one node in the cluster:

# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup

# pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" --group apachegroup

# pcs status
Cluster name: my_cluster
Stack: corosync
Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
Last updated: Fri Oct 12 09:54:33 2018
Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

2 nodes configured
2 resources configured

Online: [ z1.example.com z2.example.com ]

Full list of resources:

 Resource Group: apachegroup
     ClusterIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
     WebSite    (ocf::heartbeat:apache):        Started z1.example.com

PCSD Status:
  z1.example.com: Online
  z2.example.com: Online
...
Note that in this instance, the apachegroup service is running on node z1.example.com.

Access the website you created, stop the service on the node on which it is running, and note how the service fails over to the second node.
- Point a browser to the website you created using the floating IP address you configured. This should display the text message you defined, along with the name of the node on which the website is running.
Stop the apache web service. Using killall -9 simulates an application-level crash.

# killall -9 httpd
Check the cluster status. You should see that stopping the web service caused a failed action, but that the cluster software restarted the service on the node on which it had been running, and you should still be able to access the website.
# pcs status
Cluster name: my_cluster
Stack: corosync
Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
Last updated: Fri Oct 12 09:54:33 2018
Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

2 nodes configured
2 resources configured

Online: [ z1.example.com z2.example.com ]

Full list of resources:

 Resource Group: apachegroup
     ClusterIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
     WebSite    (ocf::heartbeat:apache):        Started z1.example.com

Failed Resource Actions:
* WebSite_monitor_60000 on z1.example.com 'not running' (7): call=31, status=complete, exitreason='none',
    last-rc-change='Fri Feb 5 21:01:41 2016', queued=0ms, exec=0ms
Clear the failure status once the service is up and running again.
# pcs resource cleanup WebSite
Put the node on which the service is running into standby mode. Note that because we have disabled fencing, we cannot effectively simulate a node-level failure (such as pulling a power cable), because fencing is required for the cluster to recover from such situations.
# pcs node standby z1.example.com
Check the status of the cluster and note where the service is now running.
# pcs status
Cluster name: my_cluster
Stack: corosync
Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
Last updated: Fri Oct 12 09:54:33 2018
Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

2 nodes configured
2 resources configured

Node z1.example.com: standby
Online: [ z2.example.com ]

Full list of resources:

 Resource Group: apachegroup
     ClusterIP  (ocf::heartbeat:IPaddr2):       Started z2.example.com
     WebSite    (ocf::heartbeat:apache):        Started z2.example.com
- Access the website. There should be no loss of service, although the display message should indicate the node on which the service is now running.
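A quick command-line check with curl illustrates this. The output below is only an illustration; in this example the page is now served by z2.example.com, and the hostname in the response will match that node's configured hostname.

# curl http://192.168.122.120
<html>
<body>My Test Site - z2.example.com</body>
</html>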
To restore cluster services to the first node, take the node out of standby mode. This will not necessarily move the service back to that node.
# pcs node unstandby z1.example.com
For final cleanup, stop the cluster services on both nodes.
# pcs cluster stop --all