Chapter 6. Configuring a quorum device in the cluster
We recommend that you configure a qdevice in your cluster for improved service resiliency. Alternatively, you can configure a dedicated cluster node that serves only to add a quorum vote.
Do not configure both a qdevice and a majority-maker node in the same cluster. Each method adds one vote, so combining them results in an even number of quorum votes again.
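For example, a four-node cluster with one additional quorum vote has five expected votes, and quorum requires a majority of three. If the connection between the two data centers is lost, the two nodes of one site together with the extra vote reach the three-vote majority, while the two nodes of the other site alone do not and lose quorum.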
6.1. Configuring a qdevice for cluster quorum
If you prefer to use a dedicated majority-maker cluster node for this purpose, skip the qdevice setup and follow the steps in Configuring a majority-maker node for cluster quorum instead.
6.1.1. Preparing the quorum device host
- Configure the RHEL High Availability repository on the quorum device host.
- If the firewalld service is installed and you are not using it on your hosts, disable the service, for example as shown below. See Disabling the firewalld service.
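A minimal example, assuming you have decided not to use firewalld on the quorum device host at all:

[root]# systemctl disable --now firewalld.service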
6.1.2. Configuring a qdevice on a quorum device host
First, you must configure a quorum device host that serves the qdevice for your cluster quorum.
In the following steps, the example name of the qdevice host is dc3qdevice.
Prerequisites
- You have installed a separate host that is ideally located in a different location or availability zone than your cluster nodes.
- You have configured the RHEL High Availability repository on the dedicated quorum device host.
- You have configured your network in a way that your cluster nodes can reach the quorum device host.
Procedure
Install pcs and corosync-qnetd on the quorum device host:

[root]# dnf install pcs corosync-qnetd

Start and enable the pcsd service on the quorum device host:

[root]# systemctl enable --now pcsd.service

Create the qdevice on the quorum device host. This command configures and starts the quorum device model net and configures the device to start on boot. Run this command on the quorum device host:

[root]# pcs qdevice setup model net --enable --start
Quorum device 'net' initialized
quorum device enabled
Starting quorum device...
quorum device started

Optional: If you are running the firewalld service, enable the ports that are required by the Red Hat High Availability Add-On. Run this on the quorum device host:

[root]# firewall-cmd --add-service=high-availability
[root]# firewall-cmd --runtime-to-permanent

Set a password for the user hacluster on the quorum device host:

[root]# passwd hacluster
Verification
Check the quorum device status on the quorum device host:
[root]# pcs qdevice status net --full
QNetd address:                  *:5403
TLS:                            Supported (client certificate required)
Connected clients:              0
Connected clusters:             0
Maximum send/receive size:      32768/32768 bytes
6.1.3. Configuring a qdevice in the cluster
Prerequisites
- You have configured a quorum device host that is ideally located in a different location or availability zone than your cluster nodes, for example, dc3qdevice.
- You have configured a qdevice on the quorum device host.
- You have configured your network in a way that your cluster nodes can reach the quorum device host (see the optional reachability check after this list).
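Optionally, you can verify from one cluster node that the qnetd port (5403 by default) on the quorum device host is reachable. The following is only a sketch and uses the bash /dev/tcp built-in, so it does not depend on additional packages:

[root]# timeout 3 bash -c 'cat < /dev/null > /dev/tcp/dc3qdevice/5403' && echo "qnetd port reachable"

If the command prints nothing and returns a non-zero exit code, check the network path and any firewall between the cluster nodes and the quorum device host.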
Procedure
Add the qdevice host to the /etc/hosts file on all existing cluster nodes, so that the resulting /etc/hosts entries are the same on all nodes. This ensures that the cluster nodes can communicate with the host even if the DNS service fails:

[root]# cat /etc/hosts
...
192.168.100.101 dc1hana1.example.com dc1hana1
192.168.100.102 dc1hana2.example.com dc1hana2
192.168.100.103 dc2hana1.example.com dc2hana1
192.168.100.104 dc2hana2.example.com dc2hana2
192.168.100.120 dc3qdevice.example.com dc3qdevice

Install corosync-qdevice on all nodes of your cluster:

[root]# dnf install corosync-qdevice

Authenticate the quorum device host in the cluster to enable communication. Run this command on one cluster node:
[root]# pcs host auth <qdevice_host>
Username: hacluster
Password:
dc3qdevice: Authorized

- Replace <qdevice_host> with the name of your quorum device host, for example, dc3qdevice.
Add the qdevice from the quorum device host to the cluster. Run this command on one cluster node:
[root]# pcs quorum device add model net host=<qdevice_host> algorithm=ffsplit
Setting up qdevice certificates on nodes...
dc2hana1: Succeeded
dc1hana2: Succeeded
dc2hana2: Succeeded
dc1hana1: Succeeded
Enabling corosync-qdevice...
dc2hana1: corosync-qdevice enabled
dc1hana1: corosync-qdevice enabled
dc1hana2: corosync-qdevice enabled
dc2hana2: corosync-qdevice enabled
Sending updated corosync.conf to nodes...
dc2hana1: Succeeded
dc1hana1: Succeeded
dc1hana2: Succeeded
dc2hana2: Succeeded
dc1hana1: Corosync configuration reloaded
Starting corosync-qdevice...
dc2hana1: corosync-qdevice started
dc1hana2: corosync-qdevice started
dc1hana1: corosync-qdevice started
dc2hana2: corosync-qdevice started
- Replace <qdevice_host> with the name of your quorum device host, for example, dc3qdevice.
- The algorithm can be ffsplit or lms. Consult the corosync-qdevice(8) man page for more details about the different algorithms.
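If you decide to switch the algorithm later, pcs provides an update subcommand for the quorum device. The following is a sketch only; depending on your environment the corosync-qdevice service might need to be restarted for the change to take effect, so consult the pcs(8) man page first:

[root]# pcs quorum device update model algorithm=lms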
Verification
Check the quorum configuration on a cluster node:
[root]# pcs quorum config
Device:
  votes: 1
  Model: net
    algorithm: ffsplit
    host: dc3qdevice

Check the quorum status on a cluster node:
[root]# pcs quorum status
Quorum information
------------------
Date:             Wed Sep  3 14:01:24 2025
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          1
Ring ID:          1.180
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1    A,V,NMW dc1hana1 (local)
         2          1    A,V,NMW dc1hana2
         3          1    A,V,NMW dc2hana1
         4          1    A,V,NMW dc2hana2
         0          1            Qdevice

Check the quorum device status on a cluster node:
[root]# pcs quorum device status
Qdevice information
-------------------
Model:                  Net
Node ID:                1
Configured node list:
    0   Node ID = 1
    1   Node ID = 2
    2   Node ID = 3
    3   Node ID = 4
Membership node list:   1, 2, 3, 4

Qdevice-net information
----------------------
Cluster name:           hana-scaleout-cluster
QNetd host:             dc3qdevice:5403
Algorithm:              Fifty-Fifty split
Tie-breaker:            Node with lowest node ID
State:                  Connected

Check the quorum device status on the quorum device host. The status now shows the details of the cluster on which the qdevice is used. If the same qdevice is configured in multiple clusters, the status contains the details for each cluster:
[root]# pcs qdevice status net --full
QNetd address:                  *:5403
TLS:                            Supported (client certificate required)
Connected clients:              4
Connected clusters:             1
Maximum send/receive size:      32768/32768 bytes
Cluster "hana-scaleout-cluster":
    Algorithm:          Fifty-Fifty split (KAP Tie-breaker)
    Tie-breaker:        Node with lowest node ID
    Node ID 1:
        Client address:         ::ffff:10.99.30.121:53312
        HB interval:            8000ms
        Configured node list:   1, 2, 3, 4
        Ring ID:                1.180
        Membership node list:   1, 2, 3, 4
        Heuristics:             Undefined (membership: Undefined, regular: Undefined)
        TLS active:             Yes (client certificate verified)
        Vote:                   ACK (ACK)
    Node ID 2:
        Client address:         ::ffff:10.99.30.57:63610
        HB interval:            8000ms
        Configured node list:   1, 2, 3, 4
        Ring ID:                1.180
        Membership node list:   1, 2, 3, 4
        Heuristics:             Undefined (membership: Undefined, regular: Undefined)
        TLS active:             Yes (client certificate verified)
        Vote:                   ACK (ACK)
    Node ID 3:
        Client address:         ::ffff:10.99.30.120:26758
        HB interval:            8000ms
        Configured node list:   1, 2, 3, 4
        Ring ID:                1.180
        Membership node list:   1, 2, 3, 4
        Heuristics:             Undefined (membership: Undefined, regular: Undefined)
        TLS active:             Yes (client certificate verified)
        Vote:                   ACK (ACK)
    Node ID 4:
        Client address:         ::ffff:10.99.30.158:54932
        HB interval:            8000ms
        Configured node list:   1, 2, 3, 4
        Ring ID:                1.180
        Membership node list:   1, 2, 3, 4
        Heuristics:             Undefined (membership: Undefined, regular: Undefined)
        TLS active:             Yes (client certificate verified)
        Vote:                   No change (ACK)
6.2. Configuring a majority-maker node for cluster quorum
You can use an additional cluster node to provide an extra quorum vote. In the following steps, you configure such a majority-maker node, dc3mm, in the existing cluster.
6.2.1. Preparing the majority-maker node
- Install the node with the same operating system version as your HANA nodes.
- Configure the RHEL High Availability repository on the majority-maker node.
- If the firewalld service is installed and you are not using it on your hosts, disable the service. See Disabling the firewalld service.
6.2.2. Updating the host names in /etc/hosts
As for all cluster nodes, we recommend that you also add the majority-maker node to the /etc/hosts file on each node. On the new dc3mm node, add all cluster nodes.
Procedure
Add the new host to the /etc/hosts file on all existing cluster nodes and add all hosts to the new dc3mm host, so that the resulting /etc/hosts entries are the same on all nodes:

[root]# cat /etc/hosts
...
192.168.100.101 dc1hana1.example.com dc1hana1
192.168.100.102 dc1hana2.example.com dc1hana2
192.168.100.103 dc2hana1.example.com dc2hana1
192.168.100.104 dc2hana2.example.com dc2hana2
192.168.100.110 dc3mm.example.com dc3mm
Verification
Check that the hosts can ping each other. This step is optional and is only an example of a basic verification. The system resolves the entries in /etc/hosts when you use the ping command:

[root]# ping dc3mm.example.com
PING dc3mm.example.com (192.168.100.110) 56(84) bytes of data.
64 bytes from dc3mm.example.com (192.168.100.110): icmp_seq=1 ttl=64 time=0.017 ms
…
6.2.3. Updating the cluster clone resources
The cluster automatically creates one copy of a clone resource for each cluster node and uses this to calculate resource allocations. For example, if the cluster consists of 4 HANA nodes and you add an additional majority-maker node for a quorum vote only, the cluster automatically includes all 5 cluster members when it assigns clone resource instances. However, including this non-HANA node in the calculations can lead to unexpected behavior when the cluster moves resources.
To prevent this, you must explicitly set the clone-max limit for all cloned resources to the number of HANA nodes only. Adjust the clone configuration before you add the new node to the cluster.
Procedure
Update the SAPHanaTopology resource clone and limit it to the number of HANA nodes, for example, 4:

[root]# pcs resource update cln_SAPHanaTop_<SID>_HDB<instance> meta clone-max=4

Update the SAPHanaController resource clone and limit it to the number of HANA nodes, for example, 4:

[root]# pcs resource update cln_SAPHanaCon_<SID>_HDB<instance> meta clone-max=4

Optional: If you have configured the SAPHanaFilesystem resource clone, also limit it to the number of HANA nodes, for example, 4:

[root]# pcs resource update cln_SAPHanaFil_<SID>_HDB<instance> meta clone-max=4
Verification
Check that the clone-max option is correct for all clone resources:

[root]# pcs resource config | grep -i clone
  Clone: cln_SAPHanaTop_RH1_HDB02 clone-max=4 clone-node-max=1
  Clone: cln_SAPHanaCon_RH1_HDB02 clone-max=4 clone-node-max=1
  Clone: cln_SAPHanaFil_RH1_HDB02 clone-max=4 clone-node-max=1

- clone-max must be the number of HANA nodes, for example, 4. When this option is not displayed, it defaults to the total number of cluster nodes instead.
6.2.4. Installing the cluster components on the majority-maker node
Install the same cluster packages as on the existing cluster nodes to prepare the host in the same way.
Prerequisites
- You have configured the RHEL High Availability repository on the majority-maker host.
Procedure
Install the Red Hat High Availability Add-On software packages from the High Availability repository. Choose the same fence agents as you are using on the existing cluster nodes:
[root]# dnf install pcs pacemaker fence-agents-<model>

Start and enable the pcsd service on the new node. The --now parameter automatically starts the enabled service:

[root]# systemctl enable --now pcsd.service

Optional: If you use the local firewalld service, you must enable the ports that are required by the Red Hat High Availability Add-On. Run this on the new node:

[root]# firewall-cmd --add-service=high-availability
[root]# firewall-cmd --runtime-to-permanent

Set a password for the user hacluster on the new node, using the same password as on the existing cluster nodes:

[root]# passwd hacluster
Verification
Check that the pcsd service is running and shows as loaded and active on the new node:

[root]# systemctl status pcsd.service
● pcsd.service - PCS GUI and remote configuration interface
     Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; preset: disabled)
     Active: active (running) since …
…
6.2.5. Adding the new node to the cluster
Add the dedicated majority-maker node as a regular cluster member.
Prerequisites
- You have configured a cluster to which you want to add this node as a member.
Procedure
Authenticate the user hacluster for the new node in the cluster. Run this on one cluster node:

[root]# pcs host auth dc3mm
Username: hacluster
Password:
dc3mm: Authorized

- Enter the node name with or without the FQDN, as defined in the /etc/hosts file.
- Enter the hacluster user password at the prompt.
Add the new node to the cluster. Run this on one cluster node:
[root]# pcs cluster node add dc3mm
No addresses specified for host 'dc3mm', using 'dc3mm'
Disabling sbd...
dc3mm: sbd disabled
Sending 'corosync authkey', 'pacemaker authkey' to 'dc3mm'
dc3mm: successful distribution of the file 'corosync authkey'
dc3mm: successful distribution of the file 'pacemaker authkey'
Sending updated corosync.conf to nodes...
dc2hana1: Succeeded
dc3mm: Succeeded
dc1hana2: Succeeded
dc2hana2: Succeeded
dc1hana1: Succeeded
dc2hana1: Corosync configuration reloaded

Add a location constraint to prevent any HANA resource from running on this node, and also to prevent the cluster from trying to check the initial resource status. The following constraint definition uses a regexp expression to match all HANA resources. If required, adjust the pattern to match your resource names:

[root]# pcs constraint location add avoid-dc3mm \
    regexp%.*SAPHana.* dc3mm -- -INFINITY resource-discovery=never

Enable the cluster on the new node to be started automatically on system start. Run on any node:
[root]# pcs cluster enable dc3mm
dc3mm: Cluster Enabled

Start the cluster on the new node:

[root]# pcs cluster start dc3mm
dc3mm: Starting Cluster...
Verification
Check the location constraint that keeps HANA resources off the new node:
[root]# pcs constraint location --full
Location Constraints:
…
  resource pattern '.*SAPHana.*' avoids node 'dc3mm' with score INFINITY (id: avoid-dc3mm) resource-discovery=never

Check the cluster status. Verify that the cluster daemon services are in the desired state. Run this on the new node to also verify the local daemon status at the end:
[root]# pcs status --full
…
Node List:
  * Node dc1hana1 (1): online, feature set 3.19.6
  * Node dc1hana2 (2): online, feature set 3.19.6
  * Node dc2hana1 (3): online, feature set 3.19.6
  * Node dc2hana2 (4): online, feature set 3.19.6
  * Node dc3mm (5): online, feature set 3.19.6

Full List of Resources:
  * Clone Set: cln_SAPHanaTop_RH1_HDB02 [rsc_SAPHanaTop_RH1_HDB02]:
    * rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology):   Started dc1hana2
    * rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology):   Started dc1hana1
    * rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology):   Started dc2hana1
    * rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology):   Started dc2hana2
  * Clone Set: cln_SAPHanaCon_RH1_HDB02 [rsc_SAPHanaCon_RH1_HDB02] (promotable):
    * rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted dc1hana2
    * rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Promoted dc1hana1
    * rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted dc2hana1
    * rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted dc2hana2
  * Clone Set: cln_SAPHanaFil_RH1_HDB02 [rsc_SAPHanaFil_RH1_HDB02]:
    * rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc1hana2
    * rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc1hana1
    * rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc2hana1
    * rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc2hana2
  * rsc_vip_RH1_HDB02_primary (ocf:heartbeat:IPAddr2):  Started dc1hana1
  * rsc_vip_RH1_HDB02_readonly (ocf:heartbeat:IPAddr2): Started dc2hana1
…

PCSD Status:
  dc1hana1: Online
  dc1hana2: Online
  dc2hana1: Online
  dc2hana2: Online
  dc3mm: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

- The new node must be displayed in the Node List and in the PCSD Status list.
- The new node must not show a status in the Full List of Resources.
Verify the quorum status. The new node adds a vote, so there is now an odd number of votes and an even 50/50 split can no longer occur:
[root]# pcs quorum status
Quorum information
------------------
Date:             Tue Sep  2 14:56:29 2025
Quorum provider:  corosync_votequorum
Nodes:            5
Node ID:          5
Ring ID:          1.27
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1         NR dc1hana1
         2          1         NR dc1hana2
         3          1         NR dc2hana1
         4          1         NR dc2hana2
         5          1         NR dc3mm (local)
Next steps
- Add the new node to your individual fencing method. See Configuring fencing in a Red Hat High Availability cluster.
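How you add the node depends on the fence agent in use. As a sketch only: if your fence device restricts its targets with a static pcmk_host_list attribute, you can extend that attribute to include the new node. The device name and node list below are placeholders, not values from this procedure:

[root]# pcs stonith update <fence_device> pcmk_host_list="dc1hana1 dc1hana2 dc2hana1 dc2hana2 dc3mm"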