Appendix B. Configuration Example Using pcs Commands
This appendix provides a step-by-step procedure for configuring a two-node Red Hat Enterprise Linux High Availability Add-On cluster, using the pcs command, in Red Hat Enterprise Linux release 6.6 and later. It also describes how to configure an Apache HTTP server in this cluster.
Configuring the cluster provided in this appendix requires that your system include the following components:
- Two nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
- Network switches for the private network, required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
- A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com.
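Before you begin, it can be worth confirming that each node can resolve and reach the other node and the fence device. The following is a minimal sketch using the example host names from this appendix; adjust the list for your environment.

# Run on each cluster node; the host names are the examples used in this appendix.
for host in z1.example.com z2.example.com zapc.example.com; do
    ping -c 1 "$host" > /dev/null 2>&1 && echo "$host: reachable" || echo "$host: UNREACHABLE"
done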
B.1. Initial System Setup
This section describes the initial setup of the system that you will use to create the cluster.
B.1.1. Installing the Cluster Software
Use the following procedure to install the cluster software.
- Ensure that pcs, pacemaker, cman, and fence-agents are installed.

  yum install -y pcs pacemaker cman fence-agents

- After installation, to prevent corosync from starting without cman, execute the following commands on all nodes in the cluster. A check that verifies both of these steps follows this procedure.

  # chkconfig corosync off
  # chkconfig cman off
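To verify both steps, you can confirm that the packages are present and that corosync and cman are disabled in every runlevel. This is a minimal check, using only the package and service names from the procedure above.

# rpm -q pcs pacemaker cman fence-agents
# chkconfig --list corosync
# chkconfig --list cman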
B.1.2. Creating and Starting the Cluster
This section provides the procedure for creating the initial cluster, on which you will configure the cluster resources.
- In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.

  # passwd hacluster
  Changing password for user hacluster.
  New password:
  Retype new password:
  passwd: all authentication tokens updated successfully.

- Before the cluster can be configured, the pcsd daemon must be started. This daemon works with the pcs command to manage configuration across the nodes in the cluster.

  On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start.

  # service pcsd start
  # chkconfig pcsd on
- Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs. For a non-interactive, scripted variant of the password and authentication steps, see the sketch following this procedure.

  The following command authenticates user hacluster on z1.example.com for both of the nodes in the example two-node cluster, z1.example.com and z2.example.com.

  [root@z1 ~]# pcs cluster auth z1.example.com z2.example.com
  Username: hacluster
  Password:
  z1.example.com: Authorized
  z2.example.com: Authorized

- Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com. This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster.

  [root@z1 ~]# pcs cluster setup --start --name my_cluster \
  z1.example.com z2.example.com
  z1.example.com: Succeeded
  z1.example.com: Starting Cluster...
  z2.example.com: Succeeded
  z2.example.com: Starting Cluster...

- Optionally, you can enable the cluster services to run on each node in the cluster when the node is booted.
  Note

  For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node.

  # pcs cluster enable --all
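If you are scripting this setup, the password and authentication steps can be run without prompts. The following is a sketch, assuming the shared hacluster password is held in a shell variable named PASS (a hypothetical name used here for illustration); passwd --stdin is specific to Red Hat-derived distributions, and pcs cluster auth accepts -u and -p for the user name and password.

# On each node: set the hacluster password non-interactively.
# PASS is a hypothetical variable holding your chosen password.
echo "$PASS" | passwd --stdin hacluster

# On the administration node: authenticate both nodes without prompts.
pcs cluster auth z1.example.com z2.example.com -u hacluster -p "$PASS"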
You can display the current status of the cluster with the pcs cluster status command. Because there may be a slight delay before the cluster is up and running when you start the cluster services with the --start option of the pcs cluster setup command, you should ensure that the cluster is up and running before performing any subsequent actions on the cluster and its configuration. If you need to wait for the cluster from a script, a polling sketch follows the example output below.
[root@z1 ~]# pcs cluster status
Cluster Status:
Last updated: Thu Jul 25 13:01:26 2013
Last change: Thu Jul 25 13:04:45 2013 via crmd on z2.example.com
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.10-5.el7-9abe687
2 Nodes configured
0 Resources configured
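If a script needs to block until the cluster is ready, note that pcs cluster status returns a nonzero exit status while the cluster is not yet running, so a simple polling loop is enough. This is a sketch, not part of the documented procedure; the 60-second limit is an arbitrary choice.

# Poll for up to 60 seconds after 'pcs cluster setup --start'.
for i in $(seq 1 12); do
    pcs cluster status > /dev/null 2>&1 && break
    sleep 5
done
pcs cluster status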