
Chapter 31. Integrating non-corosync nodes into a cluster: the pacemaker_remote service


The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes.

Among the capabilities that the pacemaker_remote service provides are the following:

  • The pacemaker_remote service allows you to scale beyond the Red Hat support limit of 32 nodes.
  • The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources.

The following terms are used to describe the pacemaker_remote service.

  • cluster node - A node running the High Availability services (pacemaker and corosync).
  • remote node - A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent.
  • guest node - A virtual guest node running the pacemaker_remote service. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node.
  • pacemaker_remote - A service daemon capable of performing remote application management within remote nodes and KVM guest nodes in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker’s local executor daemon (pacemaker-execd) that is capable of managing resources remotely on a node not running corosync.

A Pacemaker cluster running the pacemaker_remote service has the following characteristics.

  • Remote nodes and guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side).
  • The cluster stack (pacemaker and corosync), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster.
  • The cluster stack (pacemaker and corosync), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster.

The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations:

  • they do not take part in quorum
  • they do not execute fencing device actions
  • they are not eligible to be the cluster’s Designated Controller (DC)
  • they do not themselves run the full range of pcs commands

On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack.

Other than these noted limitations, remote and guest nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: you can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do.

31.1. Host and guest authentication of pacemaker_remote nodes

Pacemaker supports two methods of securing the connection between pacemaker nodes and pacemaker_remote nodes:

  • Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP.
  • TLS with SSL certificates (RHEL 9.6 and later). With this method, you can use existing certificates to secure the connection.

31.1.1. TLS with PSK encryption

When you configure a guest node with the pcs cluster node add-guest command or a remote node with the pcs cluster node add-remote command, the connection between the cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP, using port 3121 by default. This means that both the cluster node and the node running pacemaker_remote must share the same private key. By default, this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes.

The first time you run the pcs cluster node add-guest command or the pcs cluster node add-remote command, it creates the authkey and installs it on all existing nodes in the cluster. When you later create a new node of any type, the existing authkey is copied to the new node.
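
If you need to create or redistribute the key yourself, for example on a node that pcs cannot reach, the following is a minimal sketch that assumes root access and an example node named remote1; the directory permissions and node name are illustrative.

    # mkdir -p --mode=0750 /etc/pacemaker
    # chgrp haclient /etc/pacemaker
    # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
    # scp -p /etc/pacemaker/authkey remote1:/etc/pacemaker/authkey

Run the mkdir and chgrp commands on the remote node as well before copying the key to it.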

31.1.2. Configuring SSL/TLS certificates

You can encrypt Pacemaker remote connections using X.509 (SSL/TLS) certificates. With this method, you can reuse existing host certificates for Pacemaker remote connections rather than private shared keys.

To configure SSL/TLS certificates, create a remote connection with the pcs cluster node add-guest command or the pcs cluster node add-remote command. You can then convert the remote connection to use certificates.

Procedure

Use the following procedure to configure SSL/TLS certificates for securing the connection between Pacemaker nodes and remote nodes.

  1. Create a remote connection with the pcs cluster node add-guest command or the pcs cluster node add-remote command. This sets up the authkey for guest nodes or remote nodes. The following example command creates a remote node and sets up the authkey for that node.

    [root@clusternode1 ~]# pcs cluster node add-remote remote1

    For a full configuration procedure for remote nodes, see Configuring Pacemaker remote nodes.

  2. Convert the connection you have created to use SSL/TLS certificates by updating the following variables in the /etc/sysconfig/pacemaker file on all cluster nodes and Pacemaker remote nodes (an illustrative example follows this procedure):

    PCMK_ca_file - The location of a file containing trusted Certificate Authorities, used to verify client or server certificates. This file must be in PEM format and it must allow read permissions to either the hacluster user or the haclient group.

    PCMK_cert_file - The location of a file containing the signed certificate for the server side of the connection. This file must be in PEM format and it must allow read permissions to either the hacluster user or the haclient group.

    PCMK_crl_file (optional) - The location of a Certificate Revocation List file, in PEM format.

    PCMK_key_file - The location of a file containing the private key for the matching PCMK_cert_file, in PEM format. This file must be in PEM format and it must allow read permissions to either the hacluster user or the haclient group.

  3. Optionally, remove any /etc/pacemaker/authkey files from the cluster and remote nodes. Pacemaker uses certificates if certificates are configured, but removing the authkey files ensures that Pacemaker does not use PSK encryption if you have neglected to configure the certificates on a node.
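
The following is a minimal sketch of what these entries might look like in /etc/sysconfig/pacemaker; the certificate, key, and CA file paths are hypothetical and must be replaced with the locations of your own PEM files, readable by the hacluster user or the haclient group.

    PCMK_ca_file=/etc/pki/tls/certs/cluster-ca.pem
    PCMK_cert_file=/etc/pki/tls/certs/node1.crt
    PCMK_key_file=/etc/pki/tls/private/node1.key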

31.2. Configuring KVM guest nodes

A Pacemaker guest node is a virtual guest node running the pacemaker_remote service. The virtual guest node is managed by the cluster.

31.2.1. Guest node resource options

When configuring a virtual machine to act as a guest node, you create a VirtualDomain resource, which manages the virtual machine. For descriptions of the options you can set for a VirtualDomain resource, see the "Resource Options for Virtual Domain Resources" table in Virtual domain resource options.
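
For illustration only, a VirtualDomain resource for a hypothetical guest defined in /etc/libvirt/qemu/guest1.xml might be created as follows; the resource name, hypervisor URI, and configuration path are examples to adapt to your environment.

    # pcs resource create guest1_vm VirtualDomain hypervisor="qemu:///system" config="/etc/libvirt/qemu/guest1.xml"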

In addition to the VirtualDomain resource options, metadata options define the resource as a guest node and define the connection parameters. You set these resource options with the pcs cluster node add-guest command. The following table describes these metadata options.

Table 31.1. Metadata Options for Configuring KVM Resources as Remote Nodes

Field: remote-node
Default: <none>
Description: The name of the guest node this resource defines. This both enables the resource as a guest node and defines the unique name used to identify the guest node. WARNING: This value cannot overlap with any resource or node IDs.

Field: remote-port
Default: 3121
Description: Configures a custom port to use for the guest connection to pacemaker_remote.

Field: remote-addr
Default: The address provided in the pcs host auth command
Description: The IP address or host name to connect to.

Field: remote-connect-timeout
Default: 60s
Description: Amount of time before a pending guest connection will time out.
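
For illustration, the following sketch shows how these metadata options might be supplied when converting a hypothetical VirtualDomain resource named guest1_vm into a guest node named guest1; the node name, address, and timeout values are examples only.

    # pcs cluster node add-guest guest1 guest1_vm remote-addr=192.168.122.110 remote-connect-timeout=60s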

31.2.2. Integrating a virtual machine as a guest node

The following procedure is a high-level summary of the steps to perform to have Pacemaker launch a virtual machine and integrate that machine as a guest node, using libvirt and KVM virtual guests.

Procedure

  1. Configure the VirtualDomain resources.
  2. Enter the following commands on every virtual machine to install the pacemaker_remote packages, start the pcsd service and enable it to run on startup, and allow TCP ports 3121 and 2224 through the firewall.

    # dnf install pacemaker-remote resource-agents pcs
    # systemctl start pcsd.service
    # systemctl enable pcsd.service
    # firewall-cmd --add-port 3121/tcp --permanent
    # firewall-cmd --add-port 2224/tcp --permanent
    # firewall-cmd --reload
  3. Give each virtual machine a static network address and unique host name, which should be known to all nodes.
  4. If you have not already done so, authenticate pcs to the node you will be integrating as a guest node.

    # pcs host auth nodename
  5. Use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added. In addition to converting the resource, this command copies the /etc/pacemaker/authkey to the guest node and starts and enables the pacemaker_remote daemon on the guest node. The node name for the guest node, which you can define arbitrarily, can differ from the host name for the node.

    # pcs cluster node add-guest nodename resource_id [options]
  6. After creating the VirtualDomain resource, you can treat the guest node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the guest node as in the following commands, which are run from a cluster node. You can include guest nodes in groups, which allows you to group a storage device, file system, and VM.

    # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
    # pcs constraint location webserver prefers nodename

31.3. Configuring Pacemaker remote nodes

A remote node is defined as a cluster resource with ocf:pacemaker:remote as the resource agent. You create this resource with the pcs cluster node add-remote command.

31.3.1. Remote node resource options

The following table describes the resource options you can configure for a remote resource.

Table 31.2. Resource Options for Remote Nodes

Field: reconnect_interval
Default: 0
Description: Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring: if reconnection fails after the wait period, Pacemaker waits for the interval again and retries, continuing indefinitely.

Field: server
Default: Address specified with the pcs host auth command
Description: Server to connect to. This can be an IP address or host name.

Field: port
Default:
Description: TCP port to connect to.
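
As a hedged example, the following sketch adds a remote node whose connection resource retries a broken connection every 30 seconds; the node name and interval are illustrative.

    # pcs cluster node add-remote remote1 reconnect_interval=30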

31.3.2. Remote node configuration overview

The following procedure provides a high-level summary of the steps to perform to configure a Pacemaker Remote node and integrate that node into an existing Pacemaker cluster environment.

Procedure

  1. On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.

    # firewall-cmd --permanent --add-service=high-availability
    success
    # firewall-cmd --reload
    success
    Note

    If you are using iptables directly, or some other firewall solution besides firewalld, open TCP ports 2224 and 3121.

  2. Install the pacemaker_remote daemon on the remote node.

    # dnf install -y pacemaker-remote resource-agents pcs
  3. Start and enable pcsd on the remote node.

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
  4. If you have not already done so, authenticate pcs to the node you will be adding as a remote node.

    # pcs host auth remote1
  5. Add the remote node resource to the cluster with the following command. This command also syncs all relevant configuration files to the new node, starts the node, and configures it to start pacemaker_remote on boot. This command must be run on a cluster node and not on the remote node which is being added.

    # pcs cluster node add-remote remote1
  6. After adding the remote resource to the cluster, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node.

    # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
    # pcs constraint location webserver prefers remote1
    Warning

    Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint.

  7. Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node.
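
    For illustration only, a fencing resource that can target the remote node might look like the following sketch, which assumes a fence_xvm fence device is available and a remote node named remote1; substitute the fence agent and parameters appropriate to your environment.

    # pcs stonith create fence_remote1 fence_xvm pcmk_host_list="remote1"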

31.4. Changing the default port location

If you need to change the default port location for either Pacemaker or pacemaker_remote, you can set the PCMK_remote_port environment variable that affects both of these daemons. This environment variable can be enabled by placing it in the /etc/sysconfig/pacemaker file as follows.

#==#==# Pacemaker Remote
...
#
# Specify a custom port for Pacemaker Remote connections
PCMK_remote_port=3121

When changing the default port used by a particular guest node or remote node, the PCMK_remote_port variable must be set in that node’s /etc/sysconfig/pacemaker file, and the cluster resource creating the guest node or remote node connection must also be configured with the same port number (using the remote-port metadata option for guest nodes, or the port option for remote nodes).
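
For example, to run a remote node connection on a non-default port, you might follow a sketch like the one below, where the port number and node names are illustrative: set the variable on the remote node first, then create the connection resource from a cluster node with the matching port option.

    [root@remote1 ~]# echo "PCMK_remote_port=3122" >> /etc/sysconfig/pacemaker
    [root@clusternode1 ~]# pcs cluster node add-remote remote1 port=3122

Remember to also allow the custom port through the firewall on the remote node.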

31.5. Upgrading systems with pacemaker_remote nodes

If the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource’s monitor timeout, the cluster will consider the monitor operation as failed.
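
If you want more headroom before a stopped pacemaker_remote connection is treated as failed, one option is to lengthen the monitor timeout on the connection resource. The following is a sketch only, with an illustrative resource name and values.

    # pcs resource update remote1 op monitor interval=60s timeout=45s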

If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote.

Procedure

  1. Stop the node’s connection resource with the pcs resource disable resourcename command, which will move all services off the node. The connection resource would be the ocf:pacemaker:remote resource for a remote node or, commonly, the ocf:heartbeat:VirtualDomain resource for a guest node. For guest nodes, this command will also stop the VM, so the VM must be started outside the cluster (for example, using virsh) to perform any maintenance.

    pcs resource disable resourcename
  2. Perform the required maintenance.
  3. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command.

    pcs resource enable resourcename

31.6. Preventing fencing of remote nodes when a cluster partition loses quorum

On RHEL 9, the fence-remote-without-quorum cluster property is set to true by default. This means that a remote node is fenced when the cluster partition that manages it loses quorum. You can configure the cluster to prevent this fencing by setting the property to false.

Prerequisites

  • A running RHEL High-Availability cluster with at least three cluster nodes and one remote node.
  • A resource configured to run on the remote node.
  • Fencing configured for all nodes.

Procedure

  1. Ensure the cluster is using the original default behavior:

    1. Set the no-quorum-policy property to freeze.
    2. Set the fence-remote-without-quorum property to true.

      Note

      On RHEL 9, true is the default value. This step ensures the cluster is in the standard state before testing starts.

      [root@node1 ~]# pcs property set no-quorum-policy=freeze
      
      [root@node1 ~]# pcs property set fence-remote-without-quorum=true
  2. Isolate one of the cluster nodes from the other nodes. In this example, hvirt-325 is the node that manages the remote node hvirt-292:

    [root@hvirt-325 ~]# iptables -I INPUT -p udp --dport=5405 -j DROP
  3. Observe the cluster’s behavior.

    • Check the logs on the isolated node. You’ll see that the node initiates fencing for the remote node.
    • Check the fencing history for the cluster. You’ll see successful fencing actions for both the isolated cluster node and the remote node.

      Example Log Output on Isolated Node

      Jul 14 09:28:15 hvirt-325 pacemaker-schedulerd[14537]: notice: We can fence hvirt-292 without quorum...
      Jul 14 09:28:15 hvirt-325 pacemaker-schedulerd[14537]: warning: Scheduling node hvirt-292 for fencing ...
      Jul 14 09:28:19 hvirt-325 pacemaker-fenced[14534]: notice: Operation 'reboot' targeting hvirt-292 by hvirt-325...: OK (Done)

  4. Rejoin the isolated node to the cluster and ensure all nodes and resources are online:

    [root@hvirt-325 ~]# iptables -D INPUT -p udp --dport=5405 -j DROP
  5. Configure the cluster with the new behavior by setting the fence-remote-without-quorum property to false:

    [root@node1 ~]# pcs property set fence-remote-without-quorum=false
  6. Repeat the network isolation test from step 2:

    [root@hvirt-325 ~]# iptables -I INPUT -p udp --dport=5405 -j DROP

Verification

  • Check the cluster status and logs after isolating the node with fence-remote-without-quorum=false.
  • The logs on the isolated node now show that the remote node is not fenced.

    Example Log Output on Isolated Node (with fix)

    Jul 14 11:55:30 hvirt-325 pacemaker-schedulerd[2934]: warning: Node hvirt-292 is unclean but cannot be fenced
