Chapter 62. Managing cluster nodes
There are a variety of pcs commands you can use to manage cluster nodes, including commands to start and stop cluster services and to add and remove cluster nodes.
62.1. Stopping cluster services
The following command stops cluster services on the specified node or nodes. As with the pcs cluster start command, the --all option stops cluster services on all nodes, and if you do not specify any nodes, cluster services are stopped on the local node only.
pcs cluster stop [--all | node] [...]
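For example, the following commands (using an illustrative node name) stop cluster services on a single node and on all nodes, respectively.
# pcs cluster stop clusternode-01.example.com
# pcs cluster stop --all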
You can force a stop of cluster services on the local node with the following command, which performs a kill -9 command.
pcs cluster kill
62.2. Enabling and disabling cluster services
Enable the cluster services with the following command. This configures the cluster services to run on startup on the specified node or nodes.
Enabling allows nodes to automatically rejoin the cluster after they have been fenced, minimizing the time the cluster is at less than full strength. If the cluster services are not enabled, an administrator can investigate what went wrong before manually starting the cluster services, so that, for example, a node with hardware issues is not allowed back into the cluster when it is likely to fail again.
- If you specify the --all option, the command enables cluster services on all nodes.
- If you do not specify any nodes, cluster services are enabled on the local node only.
pcs cluster enable [--all | node] [...]
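For example, the following command (a usage sketch of the syntax above) configures cluster services to start on boot on every node in the cluster.
# pcs cluster enable --all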
Use the following command to configure the cluster services not to run on startup on the specified node or nodes.
- If you specify the --all option, the command disables cluster services on all nodes.
- If you do not specify any nodes, cluster services are disabled on the local node only.
pcs cluster disable [--all | node] [...]
62.3. Adding cluster nodes
Add a new node to an existing cluster with the following procedure.
This procedure adds standard cluster nodes running corosync. For information on integrating non-corosync nodes into a cluster, see Integrating non-corosync nodes into a cluster: the pacemaker_remote service.
It is recommended that you add nodes to existing clusters only during a production maintenance window. This allows you to perform appropriate resource and deployment testing for the new node and its fencing configuration.
In this example, the existing cluster nodes are clusternode-01.example.com, clusternode-02.example.com, and clusternode-03.example.com. The new node is newnode.example.com.
Procedure
On the new node to add to the cluster, perform the following tasks.
Install the cluster packages. If the cluster uses SBD, the Booth ticket manager, or a quorum device, you must manually install the respective packages (sbd, booth-site, corosync-qdevice) on the new node as well.
[root@newnode ~]# yum install -y pcs fence-agents-all
In addition to the cluster packages, you will also need to install and configure all of the services that you are running in the cluster, which you have installed on the existing cluster nodes. For example, if you are running an Apache HTTP server in a Red Hat high availability cluster, you will need to install the server on the node you are adding, as well as the wget tool that checks the status of the server.
If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability
Set a password for the user ID hacluster. It is recommended that you use the same password for each node in the cluster.
[root@newnode ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Execute the following commands to start the pcsd service and to enable pcsd at system start.
# systemctl start pcsd.service
# systemctl enable pcsd.service
On a node in the existing cluster, perform the following tasks.
Authenticate user hacluster on the new cluster node.
[root@clusternode-01 ~]# pcs host auth newnode.example.com
Username: hacluster
Password:
newnode.example.com: Authorized
Add the new node to the existing cluster. This command also syncs the cluster configuration file corosync.conf to all nodes in the cluster, including the new node you are adding.
[root@clusternode-01 ~]# pcs cluster node add newnode.example.com
On the new node to add to the cluster, perform the following tasks.
Start and enable cluster services on the new node.
[root@newnode ~]# pcs cluster start
Starting Cluster...
[root@newnode ~]# pcs cluster enable
Ensure that you configure and test a fencing device for the new cluster node.
62.4. Removing cluster nodes
The following command shuts down the specified node and removes it from the cluster configuration file, corosync.conf, on all of the other nodes in the cluster.
pcs cluster node remove node
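For example, continuing the example from Adding cluster nodes, the following illustrative command removes newnode.example.com from the cluster.
[root@clusternode-01 ~]# pcs cluster node remove newnode.example.com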
62.5. Adding a node to a cluster with multiple links
When adding a node to a cluster with multiple links, you must specify addresses for all links.
The following example adds the node rh80-node3 to a cluster, specifying IP address 192.168.122.203 for the first link and 192.168.123.203 for the second link.
# pcs cluster node add rh80-node3 addr=192.168.122.203 addr=192.168.123.203
62.6. Adding and modifying links in an existing cluster
As of RHEL 8.1, in most cases, you can add or modify the links in an existing cluster without restarting the cluster.
62.6.1. Adding and removing links in an existing cluster
To add a new link to a running cluster, use the pcs cluster link add command.
- When adding a link, you must specify an address for each node.
- Adding and removing a link is only possible when you are using the knet transport protocol.
- At least one link in the cluster must be defined at any time.
- The maximum number of links in a cluster is 8, numbered 0-7. It does not matter which links are defined, so, for example, you can define only links 3, 6, and 7.
- When you add a link without specifying its link number, pcs uses the lowest link available.
- The link numbers of currently configured links are contained in the corosync.conf file. To display the corosync.conf file, run the pcs cluster corosync command or (for RHEL 8.4 and later) the pcs cluster config show command, as shown in the example following this list.
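For example, before adding a link you can check which link numbers are already defined by displaying the cluster configuration. This usage sketch assumes RHEL 8.4 or later.
# pcs cluster config show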
The following command adds link number 5 to a three node cluster.
[root@node1 ~] # pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 node3=10.0.5.31 options linknumber=5
To remove an existing link, use the pcs cluster link delete or pcs cluster link remove command. Either of the following commands will remove link number 5 from the cluster.
[root@node1 ~] # pcs cluster link delete 5
[root@node1 ~] # pcs cluster link remove 5
62.6.2. Modifying a link in a cluster with multiple links
If there are multiple links in the cluster and you want to change one of them, perform the following procedure.
Procedure
Remove the link you want to change.
[root@node1 ~] # pcs cluster link remove 2
Add the link back to the cluster with the updated addresses and options.
[root@node1 ~] # pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 node3=10.0.5.31 options linknumber=2
62.6.3. Modifying the link addresses in a cluster with a single link
If your cluster uses only one link and you want to modify that link to use different addresses, perform the following procedure. In this example, the original link is link 1.
Add a new link with the new addresses and options.
[root@node1 ~] # pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 node3=10.0.5.31 options linknumber=2
Remove the original link.
[root@node1 ~] # pcs cluster link remove 1
Note that you cannot specify addresses that are currently in use when adding links to a cluster. This means, for example, that if you have a two-node cluster with one link and you want to change the address for one node only, you cannot use the above procedure to add a new link that specifies one new address and one existing address. Instead, you can add a temporary link before removing the existing link and adding it back with the updated address, as in the following example.
In this example:
- The link for the existing cluster is link 1, which uses the address 10.0.5.11 for node 1 and the address 10.0.5.12 for node 2.
- You would like to change the address for node 2 to 10.0.5.31.
Procedure
To update only one of the addresses for a two-node cluster with a single link, use the following procedure.
Add a new temporary link to the existing cluster, using addresses that are not currently in use.
[root@node1 ~] # pcs cluster link add node1=10.0.5.13 node2=10.0.5.14 options linknumber=2
Remove the original link.
[root@node1 ~] # pcs cluster link remove 1
Add the new, modified link.
[root@node1 ~] # pcs cluster link add node1=10.0.5.11 node2=10.0.5.31 options linknumber=1
Remove the temporary link you created.
[root@node1 ~] # pcs cluster link remove 2
62.6.4. Modifying the link options for a link in a cluster with a single link
If your cluster uses only one link and you want to modify the options for that link without changing the addresses it uses, you can add a temporary link before removing the existing link and adding it back with the updated options, as in the following example.
In this example:
- The link for the existing cluster is link 1, which uses the address 10.0.5.11 for node 1 and the address 10.0.5.12 for node 2.
- You would like to change the link option link_priority to 11.
Procedure
Modify the link option in a cluster with a single link with the following procedure.
Add a new temporary link to the existing cluster, using addresses that are not currently in use.
[root@node1 ~] # pcs cluster link add node1=10.0.5.13 node2=10.0.5.14 options linknumber=2
Remove the original link.
[root@node1 ~] # pcs cluster link remove 1
Add back the original link with the updated options.
[root@node1 ~] # pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 options linknumber=1 link_priority=11
Remove the temporary link.
[root@node1 ~] # pcs cluster link remove 2
62.6.5. Modifying a link when adding a new link is not possible
If for some reason adding a new link is not possible in your configuration and your only option is to modify a single existing link, you can use the following procedure, which requires that you shut your cluster down.
Procedure
The following example procedure updates link number 1 in the cluster and sets the link_priority option for the link to 11.
Stop the cluster services for the cluster.
[root@node1 ~] # pcs cluster stop --all
Update the link addresses and options.
The pcs cluster link update command does not require that you specify all of the node addresses and options. Instead, you can specify only the addresses to change. This example modifies the addresses for node1 and node3 and the link_priority option only.
[root@node1 ~] # pcs cluster link update 1 node1=10.0.5.11 node3=10.0.5.31 options link_priority=11
To remove an option, you can set the option to a null value with the option= format.
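For example, assuming the link_priority option is currently set on link 1, a command of the following form (an illustrative sketch of the option= format) would clear it.
[root@node1 ~] # pcs cluster link update 1 options link_priority=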
Restart the cluster.
[root@node1 ~] # pcs cluster start --all
62.7. Configuring a node health strategy
A node might be functioning well enough to maintain its cluster membership and yet be unhealthy in some respect that makes it an undesirable location for resources. For example, a disk drive might be reporting SMART errors, or the CPU might be highly loaded. As of RHEL 8.7, you can use a node health strategy in Pacemaker to automatically move resources off unhealthy nodes.
You can monitor a node’s health with the following health node resource agents, which set node attributes based on CPU and disk status:
- ocf:pacemaker:HealthCPU, which monitors CPU idling
- ocf:pacemaker:HealthIOWait, which monitors the CPU I/O wait
- ocf:pacemaker:HealthSMART, which monitors the SMART status of a disk drive
- ocf:pacemaker:SysInfo, which sets a variety of node attributes with local system information and also functions as a health agent monitoring disk space usage
Additionally, any resource agent might provide node attributes that can be used to define a node health strategy.
Procedure
The following procedure configures a node health strategy for a cluster that will move resources off any node whose CPU I/O wait goes above 15%.
Set the node-health-strategy cluster property to define how Pacemaker responds to changes in node health.
# pcs property set node-health-strategy=migrate-on-red
Create a cloned cluster resource that uses a health node resource agent, setting the allow-unhealthy-nodes resource meta option to define whether the cluster will detect if the node’s health recovers and move resources back to the node. Configure this resource with a recurring monitor action to continually check the health of all nodes.
This example creates a resource that uses the HealthIOWait resource agent to monitor the CPU I/O wait, setting a red limit for moving resources off a node to 15%. This command sets the allow-unhealthy-nodes resource meta option to true and configures a recurring monitor interval of 10 seconds.
# pcs resource create io-monitor ocf:pacemaker:HealthIOWait red_limit=15 op monitor interval=10s meta allow-unhealthy-nodes=true clone
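Once the clone is running, you can optionally confirm that health attributes are being recorded by displaying node attributes with crm_mon. This is an illustrative check; the attribute names in the output depend on the health agents you have configured.
# crm_mon --one-shot --show-node-attributes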
62.8. Configuring a large cluster with many resources
If the cluster you are deploying consists of a large number of nodes and many resources, you may need to modify the default values of the following parameters for your cluster.
- The cluster-ipc-limit cluster property
The cluster-ipc-limit cluster property is the maximum IPC message backlog before one cluster daemon will disconnect another. When a large number of resources are cleaned up or otherwise modified simultaneously in a large cluster, a large number of CIB updates arrive at once. This could cause slower clients to be evicted if the Pacemaker service does not have time to process all of the configuration updates before the CIB event queue threshold is reached.
The recommended value of cluster-ipc-limit for use in large clusters is the number of resources in the cluster multiplied by the number of nodes. This value can be raised if you see "Evicting client" messages for cluster daemon PIDs in the logs.
You can increase the value of cluster-ipc-limit from its default value of 500 with the pcs property set command. For example, for a ten-node cluster with 200 resources you can set the value of cluster-ipc-limit to 2000 with the following command.
# pcs property set cluster-ipc-limit=2000
- The PCMK_ipc_buffer Pacemaker parameter
On very large deployments, internal Pacemaker messages may exceed the size of the message buffer. When this occurs, you will see a message in the system logs of the following format:
Compressed message exceeds X% of configured IPC limit (X bytes); consider setting PCMK_ipc_buffer to X or higher
When you see this message, you can increase the value of PCMK_ipc_buffer in the /etc/sysconfig/pacemaker configuration file on each node. For example, to increase the value of PCMK_ipc_buffer from its default value to 13396332 bytes, change the uncommented PCMK_ipc_buffer field in the /etc/sysconfig/pacemaker file on each node in the cluster as follows.
PCMK_ipc_buffer=13396332
To apply this change, run the following command.
# systemctl restart pacemaker