9.4. The pacemaker_remote Service
The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes.
Among the capabilities that the pacemaker_remote service provides are the following:
- The pacemaker_remote service allows you to scale beyond the Red Hat support limit of 32 nodes for RHEL 7.7.
- The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources.
The following terms are used to describe the pacemaker_remote service.
- cluster node — A node running the High Availability services (pacemaker and corosync).
- remote node — A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent.
- guest node — A virtual guest node running the pacemaker_remote service. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node.
- pacemaker_remote — A service daemon capable of performing remote application management within remote nodes and guest nodes (KVM and LXC) in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker's local resource management daemon (LRMD) that is capable of managing resources remotely on a node not running corosync.
- LXC — A Linux Container defined by the libvirt-lxc Linux container driver.
A Pacemaker cluster running the pacemaker_remote service has the following characteristics.
- Remote nodes and guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side).
- The cluster stack (pacemaker and corosync), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster.
- The cluster stack (pacemaker and corosync), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster.
The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations:
- they do not take part in quorum
- they do not execute fencing device actions
- they are not eligible to be the cluster's Designated Controller (DC)
- they do not themselves run the full range of pcs commands
On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack.
Other than these noted limitations, the remote and guest nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: you can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do.
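For example, assuming a remote node that has already been integrated into the cluster under the hypothetical name remote1, the following commands sketch a few of these routine operations, run from a cluster node (RHEL 7 pcs syntax).
  Put the remote node into standby so that its resources move elsewhere, then bring it back online:
    # pcs cluster standby remote1
    # pcs cluster unstandby remote1
  Constrain a hypothetical resource named webserver to prefer the remote node:
    # pcs constraint location webserver prefers remote1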
9.4.1. Host and Guest Authentication
The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default). This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes.
As of Red Hat Enterprise Linux 7.4, the pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes.
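As an optional sanity check (not required by the procedures later in this section), you can verify on a remote or guest node that the key is in place and that pacemaker_remote is listening on the expected TCP port; the port shown assumes the default of 3121.
    # ls -l /etc/pacemaker/authkey
    # ss -tlnp | grep 3121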
9.4.2. Guest Node Resource Options
When configuring a virtual machine or LXC resource to act as a guest node, you create a VirtualDomain resource, which manages the virtual machine. For descriptions of the options you can set for a VirtualDomain resource, see Table 9.3, "Resource Options for Virtual Domain Resources".
In addition to the VirtualDomain resource options, metadata options define the resource as a guest node and define the connection parameters. As of Red Hat Enterprise Linux 7.4, you should set these resource options with the pcs cluster node add-guest command. In releases earlier than 7.4, you can set these options when creating the resource. Table 9.4, "Metadata Options for Configuring KVM/LXC Resources as Remote Nodes" describes these metadata options.
Table 9.4. Metadata Options for Configuring KVM/LXC Resources as Remote Nodes

| Field | Default | Description |
|---|---|---|
| remote-node | <none> | The name of the guest node this resource defines. This both enables the resource as a guest node and defines the unique name used to identify the guest node. WARNING: This value cannot overlap with any resource or node IDs. |
| remote-port | 3121 | Configures a custom port to use for the guest connection to pacemaker_remote. |
| remote-addr | remote-node value used as host name | The IP address or host name to connect to if the remote node's name is not the host name of the guest. |
| remote-connect-timeout | 60s | Amount of time before a pending guest connection will time out. |
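As an illustration, on Red Hat Enterprise Linux 7.4 and later these options are normally passed to the pcs cluster node add-guest command, while earlier releases set them as meta attributes when creating the VirtualDomain resource. The node name, resource ID, address, and configuration file below are hypothetical; check the pcs(8) man page for the exact option names accepted by your release.
  RHEL 7.4 and later:
    # pcs cluster node add-guest guest1 vm-guest1 remote-addr=192.168.122.10 remote-connect-timeout=60s
  RHEL 7.3 and earlier (setting the options as meta attributes at resource creation time):
    # pcs resource create vm-guest1 VirtualDomain hypervisor="qemu:///system" config="/etc/libvirt/qemu/guest1.xml" meta remote-node=guest1 remote-addr=192.168.122.10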
9.4.3. Remote Node Resource Options
A remote node is defined as a cluster resource with ocf:pacemaker:remote as the resource agent. In Red Hat Enterprise Linux 7.4, you should create this resource with the pcs cluster node add-remote command. In releases earlier than 7.4, you can create this resource with the pcs resource create command. Table 9.5, "Resource Options for Remote Nodes" describes the resource options you can configure for a remote resource.
Table 9.5. Resource Options for Remote Nodes

| Field | Default | Description |
|---|---|---|
| reconnect_interval | 0 | Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring. If reconnect fails after the wait period, a new reconnect attempt will be made after observing the wait time. When this option is in use, Pacemaker will keep attempting to reach out and connect to the remote node indefinitely after each wait interval. |
| server | | Server location to connect to. This can be an IP address or host name. |
| port | | TCP port to connect to. |
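As an illustration, on releases earlier than 7.4 you could create the connection resource directly and supply these options; the node name and address are hypothetical, and on 7.4 and later the pcs cluster node add-remote command described above is preferred.
    # pcs resource create remote1 ocf:pacemaker:remote server=192.168.122.20 reconnect_interval=60 op monitor interval=30s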
9.4.4. Changing Default Port Location
If you need to change the default port used by either Pacemaker or pacemaker_remote, you can set the PCMK_remote_port environment variable, which affects both of these daemons. This environment variable can be enabled by placing it in the /etc/sysconfig/pacemaker file as follows.

    #==#==# Pacemaker Remote
    ...
    #
    # Specify a custom port for Pacemaker Remote connections
    PCMK_remote_port=3121

When changing the default port used by a particular guest node or remote node, the PCMK_remote_port variable must be set in that node's /etc/sysconfig/pacemaker file, and the cluster resource creating the guest node or remote node connection must also be configured with the same port number (using the remote-port metadata option for guest nodes, or the port option for remote nodes).
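For example, to run a remote node's pacemaker_remote on TCP port 3122 rather than the default (the port number and node name here are purely illustrative), set the variable on that node and use the same value in the connection resource; if you change the port, remember to allow the new port through the node's firewall in place of 3121.
  On the remote node, in /etc/sysconfig/pacemaker:
    PCMK_remote_port=3122
  On a cluster node, when creating the connection resource:
    # pcs resource create remote1 ocf:pacemaker:remote server=remote1 port=3122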
9.4.5. Configuration Overview: KVM Guest Node
This section provides a high-level summary of the steps to perform to have Pacemaker launch a virtual machine and to integrate that machine as a guest node, using libvirt and KVM virtual guests.
- Configure the VirtualDomain resources, as described in Section 9.3, "Configuring a Virtual Domain as a Resource".
- On systems running Red Hat Enterprise Linux 7.3 and earlier, put the same encryption key with the path /etc/pacemaker/authkey on every cluster node and virtual machine with the following procedure. This secures remote communication and authentication.
  - Enter the following set of commands on every node to create the authkey directory with secure permissions.
    # mkdir -p --mode=0750 /etc/pacemaker
    # chgrp haclient /etc/pacemaker
  - The following command shows one method to create an encryption key. You should create the key only once and then copy it to all of the nodes.
    # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
- For Red Hat Enterprise Linux 7.4, enter the following commands on every virtual machine to install pacemaker_remote packages, start the pcsd service and enable it to run on startup, and allow TCP port 3121 through the firewall.
    # yum install pacemaker-remote resource-agents pcs
    # systemctl start pcsd.service
    # systemctl enable pcsd.service
    # firewall-cmd --add-port 3121/tcp --permanent
    # firewall-cmd --add-port 2224/tcp --permanent
    # firewall-cmd --reload
  For Red Hat Enterprise Linux 7.3 and earlier, run the following commands on every virtual machine to install pacemaker_remote packages, start the pacemaker_remote service and enable it to run on startup, and allow TCP port 3121 through the firewall.
    # yum install pacemaker-remote resource-agents pcs
    # systemctl start pacemaker_remote.service
    # systemctl enable pacemaker_remote.service
    # firewall-cmd --add-port 3121/tcp --permanent
    # firewall-cmd --add-port 2224/tcp --permanent
    # firewall-cmd --reload
- Give each virtual machine a static network address and unique host name, which should be known to all nodes. For information on setting a static IP address for the guest virtual machine, see the Virtualization Deployment and Administration Guide.
- For Red Hat Enterprise Linux 7.4 and later, use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added. In addition to converting the resource, this command copies the /etc/pacemaker/authkey to the guest node and starts and enables the pacemaker_remote daemon on the guest node.
    pcs cluster node add-guest hostname resource_id [options]
  For Red Hat Enterprise Linux 7.3 and earlier, use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added.
    pcs cluster remote-node add hostname resource_id [options]
- After creating the VirtualDomain resource, you can treat the guest node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the guest node as in the following commands, which are run from a cluster node. As of Red Hat Enterprise Linux 7.3, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM, as sketched after this procedure.
    # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
    # pcs constraint location webserver prefers guest1
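As noted in the last step, as of Red Hat Enterprise Linux 7.3 the VirtualDomain resource that backs a guest node can be placed in a resource group together with its storage and file system. The following sketch assumes hypothetical resources named vm-storage, vm-fs, and vm-guest1 already exist; a group starts its members in the listed order and keeps them on the same node, so the storage and file system come up before the VM.
    # pcs resource group add vm-group vm-storage vm-fs vm-guest1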
9.4.6. Configuration Overview: Remote Node (Red Hat Enterprise Linux 7.4)
This section provides a high-level summary of the steps to perform to configure a Pacemaker Remote node and to integrate that node into an existing Pacemaker cluster environment for Red Hat Enterprise Linux 7.4.
- On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.
    # firewall-cmd --permanent --add-service=high-availability
    success
    # firewall-cmd --reload
    success
  Note: If you are using iptables directly, or some other firewall solution besides firewalld, simply open TCP ports 2224 and 3121.
- Install the pacemaker_remote daemon on the remote node.
    # yum install -y pacemaker-remote resource-agents pcs
- Start and enable pcsd on the remote node.
    # systemctl start pcsd.service
    # systemctl enable pcsd.service
- If you have not already done so, authenticate pcs to the node you will be adding as a remote node.
    # pcs cluster auth remote1
- Add the remote node resource to the cluster with the following command. This command also syncs all relevant configuration files to the new node, starts the node, and configures it to start pacemaker_remote on boot. This command must be run on a cluster node and not on the remote node which is being added.
    # pcs cluster node add-remote remote1
- After adding the remote resource to the cluster, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node.
    # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
    # pcs constraint location webserver prefers remote1
  Warning: Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint.
- Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes; configure fencing resources for use with them just as you would for cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node. A hypothetical example follows this procedure.
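The fencing resource itself depends entirely on your hardware. As one hypothetical illustration, an IPMI-based fence device covering the remote node might be created from a cluster node along the following lines; the agent parameters shown (address, credentials) are placeholders only.
    # pcs stonith create fence-remote1 fence_ipmilan ipaddr=10.0.0.50 login=admin passwd=fencepass lanplus=1 pcmk_host_list=remote1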
9.4.7. Configuration Overview: Remote Node (Red Hat Enterprise Linux 7.3 and earlier)
This section provides a high-level summary of the steps to perform to configure a Pacemaker Remote node and to integrate that node into an existing Pacemaker cluster environment on a Red Hat Enterprise Linux 7.3 (and earlier) system.
- On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.
    # firewall-cmd --permanent --add-service=high-availability
    success
    # firewall-cmd --reload
    success
  Note: If you are using iptables directly, or some other firewall solution besides firewalld, simply open TCP ports 2224 and 3121.
- Install the pacemaker_remote daemon on the remote node.
    # yum install -y pacemaker-remote resource-agents pcs
- All nodes (both cluster nodes and remote nodes) must have the same authentication key installed for the communication to work correctly. If you already have a key on an existing node, use that key and copy it to the remote node. Otherwise, create a new key on the remote node.
  Enter the following set of commands on the remote node to create a directory for the authentication key with secure permissions.
    # mkdir -p --mode=0750 /etc/pacemaker
    # chgrp haclient /etc/pacemaker
  The following command shows one method to create an encryption key on the remote node.
    # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
- Start and enable the pacemaker_remote daemon on the remote node.
    # systemctl enable pacemaker_remote.service
    # systemctl start pacemaker_remote.service
- On the cluster node, create a location for the shared authentication key with the same path as the authentication key on the remote node and copy the key into that directory. In this example, the key is copied from the remote node where the key was created.
    # mkdir -p --mode=0750 /etc/pacemaker
    # chgrp haclient /etc/pacemaker
    # scp remote1:/etc/pacemaker/authkey /etc/pacemaker/authkey
- Enter the following command from a cluster node to create a remote resource. In this case the remote node is remote1.
    # pcs resource create remote1 ocf:pacemaker:remote
- After creating the remote resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node.
    # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
    # pcs constraint location webserver prefers remote1
  Warning: Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint.
- Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes; configure fencing resources for use with them just as you would for cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node.
9.4.8. System Upgrades and pacemaker_remote
As of Red Hat Enterprise Linux 7.3, if the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed.
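The monitor timeout in question is the one configured on the node's connection resource. You can inspect it from a cluster node as shown below; the resource name remote1 is hypothetical, and on RHEL 7 the relevant command is pcs resource show.
    # pcs resource show remote1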
If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote.
Warning: For Red Hat Enterprise Linux release 7.2 and earlier, if pacemaker_remote stops on a node that is currently integrated into a cluster, the cluster will fence that node. If the stop happens automatically as part of a yum update process, the system could be left in an unusable state (particularly if the kernel is also being upgraded at the same time as pacemaker_remote). For Red Hat Enterprise Linux release 7.2 and earlier you must use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote.
- Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. For guest nodes, this will also stop the VM, so the VM must be started outside the cluster (for example, using virsh) to perform any maintenance.
- Perform the required maintenance.
- When ready to return the node to the cluster, re-enable the resource with the pcs resource enable resourcename command.
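Putting the procedure together, a maintenance window for a hypothetical remote node whose connection resource is named remote1 might look like the following sketch. The pcs commands are run from a cluster node; the update itself runs on the remote node.
  Take the node out of the cluster:
    # pcs resource disable remote1
  Perform the required maintenance on the remote node, for example:
    # yum update -y
  Return the node to the cluster:
    # pcs resource enable remote1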