Chapter 65. Integrating non-corosync nodes into a cluster: the pacemaker_remote service
				The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes.
			
				Among the capabilities that the pacemaker_remote service provides are the following:
			
- The pacemaker_remote service allows you to scale beyond the Red Hat support limit of 32 nodes for RHEL 8.1.
- The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources.
				The following terms are used to describe the pacemaker_remote service.
			
- cluster node - A node running the High Availability services (pacemaker and corosync).
- remote node - A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent.
- guest node - A virtual guest node running the pacemaker_remote service. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node.
- pacemaker_remote - A service daemon capable of performing remote application management within remote nodes and KVM guest nodes in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker’s local executor daemon (pacemaker-execd) that is capable of managing resources remotely on a node not running corosync.
				A Pacemaker cluster running the pacemaker_remote service has the following characteristics.
			
- Remote nodes and guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side).
- The cluster stack (pacemaker and corosync), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster.
- The cluster stack (pacemaker and corosync), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster.
The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations:
- they do not participate in quorum
- they do not execute fencing device actions
- they are not eligible to be the cluster’s Designated Controller (DC)
- they do not themselves run the full range of pcs commands
On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack.
Other than these noted limitations, remote and guest nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: you can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do.
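For example, assuming a remote node named remote1 (a hypothetical name), the following commands, run from a cluster node, put the node into standby mode and bring it back, exactly as for a full cluster node:

```
# pcs node standby remote1
# pcs node unstandby remote1
```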
			
65.1. Host and guest authentication of pacemaker_remote nodes
					The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default). This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes.
				
					The first time you run the pcs cluster node add-guest command or the pcs cluster node add-remote command, it creates the authkey and installs it on all existing nodes in the cluster. When you later create a new node of any type, the existing authkey is copied to the new node.
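If you ever need to create and distribute the key manually instead of relying on the pcs add commands, a random key of a few kilobytes is sufficient. The following sketch assumes the Pacemaker packages (and therefore the haclient group) are already installed on the node:

```
# mkdir -p --mode=0750 /etc/pacemaker
# chgrp haclient /etc/pacemaker
# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
```

Copy the resulting file to the same path on every cluster node and every remote or guest node.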
				
65.2. Configuring KVM guest nodes
					A Pacemaker guest node is a virtual guest node running the pacemaker_remote service. The virtual guest node is managed by the cluster.
				
65.2.1. Guest node resource options
						When configuring a virtual machine to act as a guest node, you create a VirtualDomain resource, which manages the virtual machine. For descriptions of the options you can set for a VirtualDomain resource, see the "Resource Options for Virtual Domain Resources" table in Virtual domain resource options.
					
						In addition to the VirtualDomain resource options, metadata options define the resource as a guest node and define the connection parameters. You set these resource options with the pcs cluster node add-guest command. The following table describes these metadata options.
					
| Field | Default | Description |
|---|---|---|
| remote-node | <none> | The name of the guest node this resource defines. This both enables the resource as a guest node and defines the unique name used to identify the guest node. WARNING: This value cannot overlap with any resource or node IDs. |
| remote-port | 3121 | Configures a custom port to use for the guest connection to pacemaker_remote |
| remote-addr | The address provided in the pcs host auth command | The IP address or host name to connect to |
| remote-connect-timeout | 60s | Amount of time before a pending guest connection will time out |
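For example, assuming an existing VirtualDomain resource named guest1_vm and a guest reachable at 192.168.122.10 (both hypothetical), the following sketch creates a guest node named guest1 that connects on a custom port:

```
# pcs cluster node add-guest guest1 guest1_vm remote-addr=192.168.122.10 remote-port=3122 remote-connect-timeout=90s
```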
65.2.2. Integrating a virtual machine as a guest node
The following procedure is a high-level overview of the steps to perform to have Pacemaker launch a virtual machine and to integrate that machine as a guest node, using libvirt and KVM virtual guests.
					
Procedure
- Configure the VirtualDomain resources.
- Enter the following commands on every virtual machine to install the pacemaker_remote packages, start the pcsd service and enable it to run on startup, and allow TCP port 3121 through the firewall.

  ```
  # yum install pacemaker-remote resource-agents pcs
  # systemctl start pcsd.service
  # systemctl enable pcsd.service
  # firewall-cmd --add-port 3121/tcp --permanent
  # firewall-cmd --reload
  ```
- Give each virtual machine a static network address and unique host name, which should be known to all nodes.
- If you have not already done so, authenticate pcs to the node you will be integrating as a guest node.

  ```
  # pcs host auth nodename
  ```
- Use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added. In addition to converting the resource, this command copies the /etc/pacemaker/authkey file to the guest node and starts and enables the pacemaker_remote daemon on the guest node. The node name for the guest node, which you can define arbitrarily, can differ from the host name for the node.

  ```
  # pcs cluster node add-guest nodename resource_id [options]
  ```
- After creating the VirtualDomain resource, you can treat the guest node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the guest node as in the following commands, which are run from a cluster node. You can include guest nodes in groups, which allows you to group a storage device, file system, and VM, as in the sketch after this procedure.

  ```
  # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
  # pcs constraint location webserver prefers nodename
  ```
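As a sketch of that grouping pattern, the following command (using hypothetical resource names my_lvm, my_fs, and guest1_vm for the storage device, file system, and VirtualDomain resource) collects all three into one group so they start in order and stay on the same node:

```
# pcs resource group add vm-group my_lvm my_fs guest1_vm
```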
65.3. Configuring Pacemaker remote nodes
					A remote node is defined as a cluster resource with ocf:pacemaker:remote as the resource agent. You create this resource with the pcs cluster node add-remote command.
				
65.3.1. Remote node resource options
						The following table describes the resource options you can configure for a remote resource.
					
| Field | Default | Description |
|---|---|---|
| reconnect_interval | 0 | Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring. If reconnect fails after the wait period, a new reconnect attempt will be made after observing the wait time. When this option is in use, Pacemaker will keep attempting to reach out and connect to the remote node indefinitely after each wait interval. |
| server | Address specified with the pcs host auth command | Server to connect to. This can be an IP address or host name. |
| port | 3121 | TCP port to connect to |
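For example, the following sketch (with a hypothetical node name and address) adds a remote node that listens on a custom port and retries a lost connection every 60 seconds:

```
# pcs cluster node add-remote remote1 192.168.122.20 port=3122 reconnect_interval=60
```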
65.3.2. Remote node configuration overview
The following procedure provides a high-level overview of the steps to perform to configure a Pacemaker Remote node and to integrate that node into an existing Pacemaker cluster environment.
Procedure
- On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.

  ```
  # firewall-cmd --permanent --add-service=high-availability
  success
  # firewall-cmd --reload
  success
  ```

  Note: If you are using iptables directly, or some other firewall solution besides firewalld, simply open TCP ports 2224 and 3121.
- Install the pacemaker_remote daemon on the remote node.

  ```
  # yum install -y pacemaker-remote resource-agents pcs
  ```
- Start and enable pcsd on the remote node.

  ```
  # systemctl start pcsd.service
  # systemctl enable pcsd.service
  ```
- If you have not already done so, authenticate pcs to the node you will be adding as a remote node.

  ```
  # pcs host auth remote1
  ```
- Add the remote node resource to the cluster with the following command. This command also syncs all relevant configuration files to the new node, starts the node, and configures it to start pacemaker_remote on boot. This command must be run on a cluster node and not on the remote node which is being added.

  ```
  # pcs cluster node add-remote remote1
  ```
- After adding the remote resource to the cluster, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node.

  ```
  # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
  # pcs constraint location webserver prefers remote1
  ```

  Warning: Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint.
- Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node.
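As an illustration, the following sketch (with a hypothetical fence device address and credentials) creates an IPMI fence device that covers the remote node; the stonith resource itself always runs on, and is executed by, a full cluster node:

```
# pcs stonith create fence-remote1 fence_ipmilan pcmk_host_list=remote1 ip=192.168.122.200 username=admin password=secret
```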
65.4. Changing the default port location
If you need to change the default port location for either Pacemaker or pacemaker_remote, you can set the PCMK_remote_port environment variable, which affects both of these daemons. You can set this environment variable by placing it in the /etc/sysconfig/pacemaker file as follows.
				
```
#==#==# Pacemaker Remote
...
#
# Specify a custom port for Pacemaker Remote connections
PCMK_remote_port=3121
```
					When changing the default port used by a particular guest node or remote node, the PCMK_remote_port variable must be set in that node’s /etc/sysconfig/pacemaker file, and the cluster resource creating the guest node or remote node connection must also be configured with the same port number (using the remote-port metadata option for guest nodes, or the port option for remote nodes).
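For example, to move a remote node named remote1 (a hypothetical name, whose connection resource is also named remote1) to port 3122, you would set the variable on the remote node and update the connection resource on a cluster node to match; a sketch:

```
# On the remote node: change the daemon's listening port
echo "PCMK_remote_port=3122" >> /etc/sysconfig/pacemaker
systemctl restart pacemaker_remote

# On a cluster node: point the connection resource at the same port
pcs resource update remote1 port=3122
```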
				
65.5. Upgrading systems with pacemaker_remote nodes
					If the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource’s monitor timeout, the cluster will consider the monitor operation as failed.
				
					If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote.
				
Procedure
- Stop the node’s connection resource with the pcs resource disable resourcename command, which will move all services off the node. The connection resource would be the ocf:pacemaker:remote resource for a remote node or, commonly, the ocf:heartbeat:VirtualDomain resource for a guest node. For guest nodes, this command will also stop the VM, so the VM must be started outside the cluster (for example, using virsh) to perform any maintenance.

  ```
  pcs resource disable resourcename
  ```
- Perform the required maintenance.
- When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command.

  ```
  pcs resource enable resourcename
  ```
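Putting these steps together for a guest node, the following sketch assumes a hypothetical connection resource guest1_vm and a matching libvirt domain guest1:

```
pcs resource disable guest1_vm   # cluster moves services off the node, then stops the VM
virsh start guest1               # start the VM outside the cluster for the maintenance
# ... perform the maintenance, then shut the VM down again ...
virsh shutdown guest1
pcs resource enable guest1_vm    # cluster restarts the VM and reintegrates the guest node
```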