8.4.6. Configuration Overview: Remote Node
This section provides a high-level overview of the steps to configure a Pacemaker remote node and to integrate that node into an existing Pacemaker cluster environment.
- On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.

# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --reload
success

Note
If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports, which can be used by various clustering components: TCP ports 2224, 3121, and 21064, and UDP port 5405.
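For reference, a minimal sketch of opening those ports with iptables directly is shown below. This is not part of the documented procedure, and the exact chains and persistence mechanism depend on your firewall setup.

# iptables -A INPUT -p tcp --dport 2224 -j ACCEPT
# iptables -A INPUT -p tcp --dport 3121 -j ACCEPT
# iptables -A INPUT -p tcp --dport 21064 -j ACCEPT
# iptables -A INPUT -p udp --dport 5405 -j ACCEPT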
- Install the pacemaker_remote daemon on the remote node.

# yum install -y pacemaker-remote resource-agents pcs

- All nodes (both cluster nodes and remote nodes) must have the same authentication key installed for the communication to work correctly. If you already have a key on an existing node, use that key and copy it to the remote node (an example copy command is sketched at the end of this step). Otherwise, create a new key on the remote node.

Run the following set of commands on the remote node to create a directory for the authentication key with secure permissions.
# mkdir -p --mode=0750 /etc/pacemaker
# chgrp haclient /etc/pacemaker

The following command shows one method to create an encryption key on the remote node.

# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
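If an authentication key already exists on a cluster node, one way to copy it to the remote node instead of generating a new one is sketched below; run it on the remote node after creating the directory, and treat the host name node1 as a placeholder for one of your existing cluster nodes.

# scp node1:/etc/pacemaker/authkey /etc/pacemaker/authkey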
- Start and enable the pacemaker_remote daemon on the remote node.

# systemctl enable pacemaker_remote.service
# systemctl start pacemaker_remote.service
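Optionally, you can verify that the daemon is running and listening on its default TCP port 3121 before proceeding. This check is not part of the original procedure; it is only a convenience.

# systemctl status pacemaker_remote.service
# ss -tnlp | grep 3121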
- On the cluster node, create a location for the shared authentication key with the same path as the authentication key on the remote node and copy the key into that directory. In this example, the key is copied from the remote node where the key was created.

# mkdir -p --mode=0750 /etc/pacemaker
# chgrp haclient /etc/pacemaker
# scp remote1:/etc/pacemaker/authkey /etc/pacemaker/authkey
- Run the following command from a cluster node to create a remote resource. In this case the remote node is remote1.

# pcs resource create remote1 ocf:pacemaker:remote
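If the remote node's name cannot be resolved to its address from the cluster nodes, the ocf:pacemaker:remote agent also accepts a server parameter, as in the alternative sketch below; the address 192.0.2.10 is a placeholder. Running pcs status afterwards should show the remote node coming online.

# pcs resource create remote1 ocf:pacemaker:remote server=192.0.2.10
# pcs status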
- After creating the remote resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a location constraint on the resource to run on the remote node, as in the following commands, which are run from a cluster node.

# pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
# pcs constraint location webserver prefers remote1

Warning
Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint.
- Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes, so configure fencing resources for use with remote nodes just as you would for cluster nodes. Note, however, that remote nodes can never initiate a fencing action; only cluster nodes are capable of actually executing a fencing operation against another node.
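As an illustration only, the following sketch creates a fencing device on a cluster node that covers the remote node. The fence agent, IP address, and credentials shown here are placeholders; the correct agent and its parameters depend entirely on your fencing hardware and software versions.

# pcs stonith create fence-remote1 fence_ipmilan pcmk_host_list=remote1 ipaddr=192.0.2.100 login=admin passwd=changeme lanplus=1 op monitor interval=60s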