Chapter 49. Creating a Red Hat High-Availability cluster with Pacemaker
Create a Red Hat High Availability two-node cluster using the pcs
command line interface with the following procedure.
Configuring the cluster in this example requires that your system include the following components:
- 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
- Network switches for the private network. We recommend but do not require a private network for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
- A fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com.
49.1. Installing cluster software
Install the cluster software and configure your system for cluster creation with the following procedure.
Procedure
On each node in the cluster, enable the repository for high availability that corresponds to your system architecture. For example, to enable the high availability repository for an x86_64 system, you can enter the following subscription-manager command:
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel.
# yum install pcs pacemaker fence-agents-all
Alternatively, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command.
# yum install pcs pacemaker fence-agents-model
The following command displays a list of the available fence agents.
# rpm -q -a | grep fence
fence-agents-rhevm-4.0.2-3.el7.x86_64
fence-agents-ilo-mp-4.0.2-3.el7.x86_64
fence-agents-ipmilan-4.0.2-3.el7.x86_64
...
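Because pcs is already installed at this point, you can also list the fence agents that pcs recognizes, optionally filtered by a keyword. For example, the following command lists only the APC-related agents along with a short description of each:
# pcs stonith list apc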
Warning: After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors. For more information, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster.
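One possible safeguard, shown here only as an illustrative sketch and not as the sole supported approach, is to exclude the cluster packages from routine yum/dnf updates by adding an exclude line to the [main] section of /etc/yum.conf (or /etc/dnf/dnf.conf), and then update those packages manually during planned maintenance windows:
exclude=pacemaker* pcs corosync* fence-agents* resource-agents*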
If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.
Note: You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If it is installed, you can determine whether it is running with the firewall-cmd --state command.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability
Note: The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. The example here, which opens the ports that are generally required by a Pacemaker cluster, should be modified to suit local conditions. Enabling ports for the High Availability Add-On shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what each port is used for.
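To confirm that the service was added, you can list the services that firewalld currently allows; high-availability should appear in the output:
# firewall-cmd --list-services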
In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.
# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
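If you are scripting the node setup, you can set the password non-interactively instead. The following is a minimal sketch using the --stdin option of passwd; the password shown is a placeholder, so replace it with your own and keep it out of shell history as appropriate:
# echo "example-password" | passwd --stdin hacluster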
Before the cluster can be configured, the pcsd daemon must be started and enabled to start up on boot on each node. This daemon works with the pcs command to manage configuration across the nodes in the cluster.
On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start.
# systemctl start pcsd.service
# systemctl enable pcsd.service
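You can optionally confirm on each node that pcsd is both active and enabled:
# systemctl is-active pcsd.service
active
# systemctl is-enabled pcsd.service
enabled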
49.2. Installing the pcp-zeroconf package (recommended)
When you set up your cluster, it is recommended that you install the pcp-zeroconf package for the Performance Co-Pilot (PCP) tool. PCP is Red Hat’s recommended resource-monitoring tool for RHEL systems. Installing the pcp-zeroconf package allows you to have PCP running and collecting performance-monitoring data for the benefit of investigations into fencing, resource failures, and other events that disrupt the cluster.
Cluster deployments where PCP is enabled will need sufficient space available for PCP’s captured data on the file system that contains /var/log/pcp/. Typical space usage by PCP varies across deployments, but 10GB is usually sufficient when using the pcp-zeroconf default settings, and some environments may require less. Monitoring usage in this directory over a 14-day period of typical activity can provide a more accurate usage expectation.
Procedure
To install the pcp-zeroconf package, run the following command.
# yum install pcp-zeroconf
This package enables pmcd and sets up data capture at a 10-second interval.
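To monitor the disk space consumed by the captured data, as discussed above, you can periodically check the size of the PCP logging directory, for example:
# du -sh /var/log/pcp/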
For information on reviewing PCP data, see Why did a RHEL High Availability cluster node reboot - and how can I prevent it from happening again? on the Red Hat Customer Portal.
49.3. Creating a high availability cluster
Create a Red Hat High Availability Add-On cluster with the following procedure. This example procedure creates a cluster that consists of the nodes z1.example.com and z2.example.com.
Procedure
Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs.
The following command authenticates user hacluster on z1.example.com for both of the nodes in a two-node cluster that will consist of z1.example.com and z2.example.com.
[root@z1 ~]# pcs host auth z1.example.com z2.example.com
Username: hacluster
Password:
z1.example.com: Authorized
z2.example.com: Authorized
Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com. This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster.
[root@z1 ~]# pcs cluster setup my_cluster --start z1.example.com z2.example.com
Enable the cluster services to run on each node in the cluster when the node is booted.
Note: For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node.
[root@z1 ~]# pcs cluster enable --all
You can display the current status of the cluster with the pcs cluster status command. Because there may be a slight delay before the cluster is up and running when you start the cluster services with the --start option of the pcs cluster setup command, you should ensure that the cluster is up and running before performing any subsequent actions on the cluster and its configuration.
[root@z1 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: z2.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
Last updated: Thu Oct 11 16:11:18 2018
Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z2.example.com
2 Nodes configured
0 Resources configured
...
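As an additional check, you can confirm that both nodes have joined the corosync membership. The following command and output are illustrative; the exact formatting may vary with your pcs version:
[root@z1 ~]# pcs status corosync

Membership information
----------------------
    Nodeid      Votes Name
         1          1 z1.example.com (local)
         2          1 z2.example.com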
49.4. Creating a high availability cluster with multiple links
You can use the pcs cluster setup command to create a Red Hat High Availability cluster with multiple links by specifying all of the links for each node.
The format for the basic command to create a two-node cluster with two links is as follows.
pcs cluster setup cluster_name node1_name addr=node1_link0_address addr=node1_link1_address node2_name addr=node2_link0_address addr=node2_link1_address
For the full syntax of this command, see the pcs(8) man page.
When creating a cluster with multiple links, you should take the following into account.
- The order of the addr=address parameters is important. The first address specified after a node name is for link0, the second one for link1, and so forth.
- By default, if link_priority is not specified for a link, the link’s priority is equal to the link number. The link priorities are then 0, 1, 2, 3, and so forth, according to the order specified, with 0 being the highest link priority.
- The default link mode is passive, meaning the active link with the lowest-numbered link priority is used.
- With the default values of link_mode and link_priority, the first link specified will be used as the highest priority link, and if that link fails the next link specified will be used.
- It is possible to specify up to eight links using the knet transport protocol, which is the default transport protocol.
- All nodes must have the same number of addr= parameters.
- As of RHEL 8.1, it is possible to add, remove, and change links in an existing cluster using the pcs cluster link add, pcs cluster link remove, pcs cluster link delete, and pcs cluster link update commands.
- As with single-link clusters, do not mix IPv4 and IPv6 addresses in one link, although you can have one link running IPv4 and the other running IPv6.
- As with single-link clusters, you can specify addresses as IP addresses or as names, as long as the names resolve to IPv4 or IPv6 addresses and IPv4 and IPv6 addresses are not mixed in one link.
The following example creates a two-node cluster named my_twolink_cluster with two nodes, rh80-node1 and rh80-node2. rh80-node1 has two interfaces, IP address 192.168.122.201 as link0 and 192.168.123.201 as link1. rh80-node2 has two interfaces, IP address 192.168.122.202 as link0 and 192.168.123.202 as link1.
# pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202
To set a link priority to a different value than the default value, which is the link number, you can set the link priority with the link_priority option of the pcs cluster setup command. Each of the following two example commands creates a two-node cluster with two interfaces where the first link, link 0, has a link priority of 1 and the second link, link 1, has a link priority of 0. Link 1 will be used first and link 0 will serve as the failover link. Since link mode is not specified, it defaults to passive.
These two commands are equivalent. If you do not specify a link number following the link keyword, the pcs interface automatically adds a link number, starting with the lowest unused link number.
# pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link link_priority=1 link link_priority=0
# pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link linknumber=1 link_priority=0 link link_priority=1
You can set the link mode to a different value than the default value of passive with the link_mode option of the pcs cluster setup command, as in the following example.
# pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active
The following example sets both the link mode and the link priority.
# pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active link link_priority=1 link link_priority=0
For information on adding nodes to an existing cluster with multiple links, see Adding a node to a cluster with multiple links.
For information on changing the links in an existing cluster with multiple links, see Adding and modifying links in an existing cluster.
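Once a multiple-link cluster is running, you can check the state of the individual links from any node with the corosync-cfgtool utility; this is a quick sanity check rather than a formal part of the setup procedure. Each LINK ID section in the output corresponds to a link (link0, link1, and so forth), and the other nodes should be reported as connected on every link:
# corosync-cfgtool -s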
49.5. Configuring fencing
You must configure a fencing device for each node in the cluster. For information about the fence configuration commands and options, see Configuring fencing in a Red Hat High Availability cluster.
For general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster.
When configuring a fencing device, attention should be given to whether that device shares power with any nodes or devices in the cluster. If a node and its fence device share power, the cluster may be unable to fence that node if power to both is lost. Such a cluster should have either redundant power supplies for fence devices and nodes, or redundant fence devices that do not share power. Alternative methods of fencing, such as SBD or storage fencing, can also provide redundancy in the event of isolated power losses.
Procedure
This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map option.
You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com. The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc. By default, this device will use a monitor interval of sixty seconds for each node.
Note that you can use an IP address when specifying the host name for the nodes.
[root@z1 ~]# pcs stonith create myapc fence_apc_snmp ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" login="apc" passwd="apc"
The following command displays the parameters of an existing STONITH device.
[root@rh7-1 ~]# pcs stonith config myapc
Resource: myapc (class=stonith type=fence_apc_snmp)
Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 login=apc passwd=apc
Operations: monitor interval=60s (myapc-monitor-interval-60s)
After configuring your fence device, you should test the device. For information on testing a fence device, see Testing a fence device.
Do not test your fence device by disabling the network interface, as this will not properly test fencing.
Once fencing is configured and the cluster has been started, restarting the network triggers fencing for the node that restarts the network, even when the timeout is not exceeded. For this reason, do not restart the network service while the cluster service is running, as this will trigger unintentional fencing of that node.
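As an illustrative sketch of such a test, using the example device and credentials configured above, you can first query the power status of a plug directly through the fence agent and then fence one node through the cluster, confirming that the fenced node reboots and rejoins the cluster:
[root@z1 ~]# fence_apc_snmp --ip=zapc.example.com --username=apc --password=apc --plug=1 --action=status
[root@z1 ~]# pcs stonith fence z2.example.com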
49.6. Backing up and restoring a cluster configuration
The following commands back up a cluster configuration in a tar archive and restore the cluster configuration files on all nodes from the backup.
Procedure
Use the following command to back up the cluster configuration in a tar archive. If you do not specify a file name, the standard output will be used.
pcs config backup filename
The pcs config backup command backs up only the cluster configuration itself as configured in the CIB; the configuration of resource daemons is out of the scope of this command. For example, if you have configured an Apache resource in the cluster, the resource settings (which are in the CIB) will be backed up, while the Apache daemon settings (as set in /etc/httpd) and the files it serves will not be backed up. Similarly, if there is a database resource configured in the cluster, the database itself will not be backed up, while the database resource configuration (CIB) will be.
Use the following command to restore the cluster configuration files on all cluster nodes from the backup. Specifying the --local option restores the cluster configuration files only on the node from which you run this command. If you do not specify a file name, the standard input will be used.
pcs config restore [--local] [filename]
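For example, the following commands create a backup archive named mybackup and later restore it on all nodes. This sketch assumes that pcs stores the backup as mybackup.tar.bz2, which is its usual behavior when you give a name without a suffix:
# pcs config backup mybackup
# pcs config restore mybackup.tar.bz2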
49.7. Enabling ports for the High Availability Add-On
The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present.
If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability
You may need to modify which ports are open to suit local conditions.
You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the firewalld daemon is installed, you can determine whether it is running with the firewall-cmd --state command.
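To see which ports the predefined high-availability firewalld service actually opens on your system, you can inspect its service definition; the listed ports should correspond to the table below:
# firewall-cmd --info-service=high-availability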
The following table shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what the port is used for.
Port | When Required
---|---
TCP 2224 | Default pcsd port, required on all nodes (needed by the pcsd Web UI and required for node-to-node communication). It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself.
TCP 3121 | Required on all nodes if the cluster has any Pacemaker Remote nodes. Pacemaker’s pacemaker-based daemon on the full cluster nodes contacts the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121.
TCP 5403 | Required on the quorum device host when using a quorum device with corosync-qnetd.
UDP 5404-5412 | Required on corosync nodes to facilitate communication between nodes. It is crucial to open ports 5404-5412 in such a way that corosync from any node can talk to all nodes in the cluster, including itself.
TCP 21064 | Required on all nodes if the cluster contains any resources requiring DLM (such as GFS2).
TCP 9929, UDP 9929 | Required to be open on all cluster nodes and booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster.
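If the predefined high-availability service does not match your environment, you can instead open individual ports from the table. For example, the following commands, shown here only as an illustration, open the pcsd and corosync ports and then reload the firewall configuration; adjust the list to the components you actually run:
# firewall-cmd --permanent --add-port=2224/tcp --add-port=5404-5412/udp
# firewall-cmd --reload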