Chapter 1. Creating a Red Hat High-Availability Cluster with Pacemaker
This chapter describes the procedure for creating a Red Hat High Availability two-node cluster using pcs. After you have created a cluster, you can configure the resources and resource groups that you require.
Configuring the cluster provided in this chapter requires that your system include the following components:
- 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
- Network switches for the private network, required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
- A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com.
This chapter is divided into three sections.
- Section 1.1, “Cluster Software Installation” provides the procedure for installing the cluster software.
- Section 1.2, “Cluster Creation” provides the procedure for configuring a two-node cluster.
- Section 1.3, “Fencing Configuration” provides the procedure for configuring fencing devices for each node of the cluster.
1.1. Cluster Software Installation
The procedure for installing and configuring a cluster is as follows.
- On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel.
# yum install pcs pacemaker fence-agents-all
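If you want to confirm that the packages were installed before continuing, you can query the RPM database on each node. This check is an optional illustration and is not part of the documented procedure.
# rpm -q pcs pacemaker fence-agents-all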
- If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.
Note
You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the firewalld daemon is installed, you can determine whether it is running with the firewall-cmd --state command.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability
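As an optional check (an illustration, not part of the documented procedure), you can confirm that the high-availability service now appears in both the runtime and the permanent firewall configuration.
# firewall-cmd --list-services
# firewall-cmd --permanent --list-services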
- In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.
# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
- Before the cluster can be configured, the pcsd daemon must be started and enabled to start at boot on each node. This daemon works with the pcs command to manage configuration across the nodes in the cluster.
On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start.
# systemctl start pcsd.service
# systemctl enable pcsd.service
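If you want to verify that the daemon is running and enabled, you can check it on each node. This verification is an optional illustration and is not required by the procedure.
# systemctl status pcsd.service
# systemctl is-enabled pcsd.service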
- Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs.
The following command authenticates user hacluster on z1.example.com for both of the nodes in the example two-node cluster, z1.example.com and z2.example.com.
[root@z1 ~]# pcs cluster auth z1.example.com z2.example.com
Username: hacluster
Password:
z1.example.com: Authorized
z2.example.com: Authorized
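For unattended setups, pcs can also accept the credentials on the command line with the -u and -p options, so that no interactive prompt appears. The password value below is a placeholder, and passing it on the command line leaves it in the shell history, so treat this as an optional sketch rather than the recommended method.
[root@z1 ~]# pcs cluster auth z1.example.com z2.example.com -u hacluster -p password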