Chapter 65. Configuring cluster quorum
A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split-brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into only a subset of cluster nodes, the results will be unpredictable. For information about the configuration and operation of the votequorum service, see the votequorum(5) man page.
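Under the hood, the votequorum service is enabled through the quorum section of corosync.conf, which pcs generates and distributes for you. As a minimal illustrative sketch (the option values shown here are examples, not defaults), the quorum section of a two-node cluster might look like this:

quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 1
}

With two_node set, a two-node cluster can retain quorum with a single node; wait_for_all then prevents a freshly started node from claiming quorum on its own before it has seen its peer.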
65.1. Configuring quorum options
There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. The following table summarizes these options.
Option | Description |
---|---|
auto_tie_breaker | When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the node that has the lowest node ID, will remain quorate. The other nodes will be inquorate. The auto_tie_breaker option is principally used for clusters with an even number of nodes, as it allows the cluster to continue operation with an even split. The auto_tie_breaker option is incompatible with quorum devices. |
wait_for_all | When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time. The wait_for_all option is primarily used for two-node clusters and for even-node clusters using the quorum device lms (last man standing) algorithm. The wait_for_all option is automatically enabled when a cluster has two nodes, does not use a quorum device, and auto_tie_breaker is disabled. |
last_man_standing | When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all when you enable this option. The last_man_standing option is incompatible with quorum devices. |
last_man_standing_window | The time, in milliseconds, to wait before recalculating expected_votes and quorum after a node leaves the cluster. |
For further information about configuring and using these options, see the votequorum(5) man page.
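For example, assuming a hypothetical cluster named my_cluster with nodes node1 and node2, you could enable auto_tie_breaker at creation time by passing quorum options to the pcs cluster setup command (a sketch; substitute your own cluster and node names):

pcs cluster setup my_cluster node1 node2 quorum auto_tie_breaker=1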
65.2. Modifying quorum options
You can modify general quorum options for your cluster with the pcs quorum update command. You can modify the quorum.two_node and quorum.expected_votes options on a running system; for all other quorum options, executing this command requires that the cluster be stopped. For information about the quorum options, see the votequorum(5) man page.
The format of the pcs quorum update command is as follows.
pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[time-in-ms]] [wait_for_all=[0|1]]
The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running.
[root@node1:~]# pcs quorum update wait_for_all=1
Checking corosync is not running on nodes...
Error: node1: corosync is running
Error: node2: corosync is running

[root@node1:~]# pcs cluster stop --all
node2: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (corosync)...
node2: Stopping Cluster (corosync)...

[root@node1:~]# pcs quorum update wait_for_all=1
Checking corosync is not running on nodes...
node2: corosync is not running
node1: corosync is not running
Sending updated corosync.conf to nodes...
node1: Succeeded
node2: Succeeded

[root@node1:~]# pcs quorum config
Options:
  wait_for_all: 1
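Once the updated configuration has been sent to all nodes, you can start the cluster again, for example with the following command (output omitted):

[root@node1:~]# pcs cluster start --all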
65.3. Displaying quorum configuration and status
Once a cluster is running, you can enter the following cluster quorum commands to display the quorum configuration and status.
The following command shows the quorum configuration.
pcs quorum [config]
The following command shows the quorum runtime status.
pcs quorum status
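As an illustration, on a hypothetical two-node cluster the runtime status resembles the following; the values shown are examples only, and the exact fields depend on your corosync and pcs versions.

[root@node1:~]# pcs quorum status
Quorum information
------------------
Date:             ...
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           1
Flags:            2Node Quorate WaitForAll

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1         NR node1 (local)
         2          1         NR node2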
65.4. Running inquorate clusters
If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum.
Changing the expected votes in a live cluster should be done with extreme caution. If fewer than 50% of the cluster nodes are running because you have manually changed the expected votes, the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled.
The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload.
pcs quorum expected-votes votes
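For example, if three nodes of a hypothetical five-node cluster will be offline for an extended period, the two remaining nodes cannot reach the three votes normally required for quorum. Lowering the expected votes to 3 makes those two nodes a majority again (a sketch; choose the value appropriate for your cluster):

[root@node1:~]# pcs quorum expected-votes 3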
In a situation in which you know that the cluster is inquorate but you want the cluster to proceed with resource management, you can use the pcs quorum unblock command to prevent the cluster from waiting for all nodes when establishing quorum.
Use this command with extreme caution. Before issuing it, ensure that any nodes that are not currently in the cluster are switched off and have no access to shared resources.
# pcs quorum unblock