8.4.6. Configuration Overview: Remote Node


This section provides a high-level summary of the steps to configure a Pacemaker remote node and to integrate that node into an existing Pacemaker cluster environment.
  1. On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.
    # firewall-cmd --permanent --add-service=high-availability
    success
    # firewall-cmd --reload
    success
    

    Note

    If you are using iptables directly, or another firewall solution besides firewalld, open the following ports, which are used by various clustering components: TCP ports 2224, 3121, and 21064, and UDP port 5405. (An illustrative iptables example is sketched after this procedure.)
  2. Install the pacemaker_remote daemon on the remote node.
    # yum install -y pacemaker-remote resource-agents pcs
  3. All nodes (both cluster nodes and remote nodes) must have the same authentication key installed for the communication to work correctly. If you already have a key on an existing node, use that key and copy it to the remote node. Otherwise, create a new key on the remote node.
    Run the following set of commands on the remote node to create a directory for the authentication key with secure permissions.
    # mkdir -p --mode=0750 /etc/pacemaker
    # chgrp haclient /etc/pacemaker
    The following command shows one method to create an encryption key on the remote node.
    # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
  4. Start and enable the pacemaker_remote daemon on the remote node.
    # systemctl enable pacemaker_remote.service
    # systemctl start pacemaker_remote.service
  5. On the cluster node, create a location for the shared authentication key with the same path as the authentication key on the remote node and copy the key into that directory. In this example, the key is copied from the remote node where the key was created.
    # mkdir -p --mode=0750 /etc/pacemaker
    # chgrp haclient /etc/pacemaker
    # scp remote1:/etc/pacemaker/authkey /etc/pacemaker/authkey
  6. Run the following command from a cluster node to create a remote resource. In this case the remote node is remote1. (A status check to verify that the remote node comes online is sketched after this procedure.)
    # pcs resource create remote1 ocf:pacemaker:remote
  7. After creating the remote resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and add a location constraint so that the resource runs on the remote node, as in the following commands, which are run from a cluster node.
    # pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
    # pcs constraint location webserver prefers remote1

    Warning

    Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint.
  8. Configure fencing resources for the remote node. Remote nodes are fenced in the same way as cluster nodes, so configure fencing resources for them just as you would for cluster nodes. Note, however, that remote nodes can never initiate a fencing action; only cluster nodes are capable of actually executing a fencing operation against another node. (An illustrative fencing resource is sketched after this procedure.)
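
If you are using iptables directly rather than firewalld, the following is a minimal sketch of how the ports listed in the note for step 1 could be opened. Appending the rules to the INPUT chain is an assumption about your firewall layout, and making the rules persistent is left to your local iptables setup; adapt them to your environment.
    # iptables -A INPUT -p tcp --dport 2224 -j ACCEPT
    # iptables -A INPUT -p tcp --dport 3121 -j ACCEPT
    # iptables -A INPUT -p tcp --dport 21064 -j ACCEPT
    # iptables -A INPUT -p udp --dport 5405 -j ACCEPT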
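
After creating the remote resource in step 6, you can verify from any cluster node that the remote node has come online by checking the cluster status. The exact output varies by Pacemaker version, but remote1 should be listed among the online nodes.
    # pcs status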
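
The following sketch illustrates step 8 by creating a fencing resource for the remote node from a cluster node. The fence_ipmilan agent, its ip, username, and password options, and the address and credentials shown are assumptions for this example only; substitute the fence agent and parameters that match your hardware.
    # pcs stonith create fence-remote1 fence_ipmilan ip=10.0.0.9 username=admin password=passwd pcmk_host_list=remote1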