Chapter 5. Configuring the Pacemaker cluster


5.1. Deploying the basic cluster configuration

The following procedure covers the minimum steps to set up a basic Pacemaker cluster for managing SAP instances. Apply the steps to all nodes of your planned cluster configuration.

For more information on settings and options for complex configurations, refer to the documentation for the RHEL HA Add-On, for example, Create a high availability cluster with multiple links.

Prerequisites

  • You have set up the HANA system replication environment and verified that it is working correctly.
  • You have configured the RHEL High Availability repository on all systems that are going to be nodes of this cluster.
  • You have verified fencing and quorum requirements according to your planned environment. For more details see HA cluster requirements.

Procedure

  1. Install the Red Hat High Availability Add-On software packages from the High Availability repository. Choose which fence agents you want to install and execute the installation on all cluster nodes.

    [root]# dnf install pcs pacemaker fence-agents-<model>
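
    For example, on servers that use IPMI-based management controllers you could install the fence-agents-ipmilan package. The fence agent shown here is only an illustrative assumption; choose the package that matches your fencing device or platform:

    [root]# dnf install pcs pacemaker fence-agents-ipmilan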
  2. Start and enable the pcsd service on all cluster nodes. The --now parameter automatically starts the enabled service:

    [root]# systemctl enable --now pcsd.service
  3. Optional: If you use the local firewalld service you must enable the ports that are required by the Red Hat High Availability Add-On. Run this on all cluster nodes:

    [root]# firewall-cmd --add-service=high-availability
    [root]# firewall-cmd --runtime-to-permanent
  4. Set a password for the user hacluster. Repeat the command on each node using the same password:

    [root]# passwd hacluster
  5. Authenticate the user hacluster for each node in the cluster. Run this command on one node, specifying all cluster node names, for example, dc1hana1 to dc2hana4:

    [root]# pcs host auth <node1> … <node8>
    Username: hacluster
    Password:
    dc1hana2: Authorized
    dc2hana1: Authorized
    dc1hana4: Authorized
    dc1hana3: Authorized
    dc2hana2: Authorized
    dc1hana1: Authorized
    dc2hana4: Authorized
    dc2hana3: Authorized
    • Enter the node names with or without FQDN, as defined in the /etc/hosts file.
    • Enter the hacluster user password in the prompt.
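
    For example, with the node names used throughout this chapter, the command would look like this:

    [root]# pcs host auth dc1hana1 dc1hana2 dc1hana3 dc1hana4 dc2hana1 dc2hana2 dc2hana3 dc2hana4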
  6. Create the cluster with a unique name and provide the names of all cluster members with fully qualified host names. This propagates the cluster configuration to all nodes and starts the cluster with the defined cluster name. Run this command on one node, specifying all cluster node names, for example, dc1hana1 to dc2hana4:

    [root]# pcs cluster setup <cluster_name> --start <node1> … <node8>
    No addresses specified for host 'dc1hana1', using 'dc1hana1'
    No addresses specified for host 'dc1hana2', using 'dc1hana2'
    No addresses specified for host 'dc1hana3', using 'dc1hana3'
    No addresses specified for host 'dc1hana4', using 'dc1hana4'
    No addresses specified for host 'dc2hana1', using 'dc2hana1'
    No addresses specified for host 'dc2hana2', using 'dc2hana2'
    No addresses specified for host 'dc2hana3', using 'dc2hana3'
    No addresses specified for host 'dc2hana4', using 'dc2hana4'
    Destroying cluster on hosts: 'dc1hana1', 'dc1hana2', 'dc1hana3', 'dc1hana4', 'dc2hana1', 'dc2hana2', 'dc2hana3', 'dc2hana4'...
    dc1hana2: Successfully destroyed cluster
    dc2hana1: Successfully destroyed cluster
    dc2hana4: Successfully destroyed cluster
    dc2hana2: Successfully destroyed cluster
    dc1hana3: Successfully destroyed cluster
    dc1hana1: Successfully destroyed cluster
    dc2hana3: Successfully destroyed cluster
    dc1hana4: Successfully destroyed cluster
    Requesting remove 'pcsd settings' from 'dc1hana1', 'dc1hana2', 'dc1hana3', 'dc1hana4', 'dc2hana1', 'dc2hana2', 'dc2hana3', 'dc2hana4'
    dc1hana1: successful removal of the file 'pcsd settings'
    dc1hana2: successful removal of the file 'pcsd settings'
    dc1hana3: successful removal of the file 'pcsd settings'
    dc1hana4: successful removal of the file 'pcsd settings'
    dc2hana1: successful removal of the file 'pcsd settings'
    dc2hana2: successful removal of the file 'pcsd settings'
    dc2hana3: successful removal of the file 'pcsd settings'
    dc2hana4: successful removal of the file 'pcsd settings'
    Sending 'corosync authkey', 'pacemaker authkey' to 'dc1hana1', 'dc1hana2', 'dc1hana3', 'dc1hana4', 'dc2hana1', 'dc2hana2', 'dc2hana3', 'dc2hana4'
    dc1hana1: successful distribution of the file 'corosync authkey'
    dc1hana1: successful distribution of the file 'pacemaker authkey'
    dc1hana2: successful distribution of the file 'corosync authkey'
    dc1hana2: successful distribution of the file 'pacemaker authkey'
    dc1hana3: successful distribution of the file 'corosync authkey'
    dc1hana3: successful distribution of the file 'pacemaker authkey'
    dc1hana4: successful distribution of the file 'corosync authkey'
    dc1hana4: successful distribution of the file 'pacemaker authkey'
    dc2hana1: successful distribution of the file 'corosync authkey'
    dc2hana1: successful distribution of the file 'pacemaker authkey'
    dc2hana2: successful distribution of the file 'corosync authkey'
    dc2hana2: successful distribution of the file 'pacemaker authkey'
    dc2hana3: successful distribution of the file 'corosync authkey'
    dc2hana3: successful distribution of the file 'pacemaker authkey'
    dc2hana4: successful distribution of the file 'corosync authkey'
    dc2hana4: successful distribution of the file 'pacemaker authkey'
    Sending 'corosync.conf' to 'dc1hana1', 'dc1hana2', 'dc1hana3', 'dc1hana4', 'dc2hana1', 'dc2hana2', 'dc2hana3', 'dc2hana4'
    dc1hana1: successful distribution of the file 'corosync.conf'
    dc1hana2: successful distribution of the file 'corosync.conf'
    dc1hana3: successful distribution of the file 'corosync.conf'
    dc1hana4: successful distribution of the file 'corosync.conf'
    dc2hana1: successful distribution of the file 'corosync.conf'
    dc2hana2: successful distribution of the file 'corosync.conf'
    dc2hana3: successful distribution of the file 'corosync.conf'
    dc2hana4: successful distribution of the file 'corosync.conf'
    Cluster has been successfully set up.
    Starting cluster on hosts: 'dc1hana1', 'dc1hana2', 'dc1hana3', 'dc1hana4', 'dc2hana1', 'dc2hana2', 'dc2hana3', 'dc2hana4'...
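
    For example, using the cluster name hana-scaleout-cluster shown in the verification below and the node names from this chapter, the command would look like this:

    [root]# pcs cluster setup hana-scaleout-cluster --start dc1hana1 dc1hana2 dc1hana3 dc1hana4 dc2hana1 dc2hana2 dc2hana3 dc2hana4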
  7. Enable the cluster services (corosync and pacemaker) to start automatically at boot on all cluster nodes. Skip this step if you prefer to start the cluster manually after a node restarts. Run this on one node:

    [root]# pcs cluster enable --all
    dc1hana1: Cluster Enabled
    dc1hana2: Cluster Enabled
    dc1hana3: Cluster Enabled
    dc1hana4: Cluster Enabled
    dc2hana1: Cluster Enabled
    dc2hana2: Cluster Enabled
    dc2hana3: Cluster Enabled
    dc2hana4: Cluster Enabled

Verification

  • Check the cluster status after the initial configuration. Verify that it shows all cluster nodes as Online and the status of all cluster daemons is active/enabled:

    [root]# pcs status --full
    Cluster name: hana-scaleout-cluster
    
    WARNINGS:
    No stonith devices and stonith-enabled is not false
    
    Status of pacemakerd: 'Pacemaker is running' (last updated 2025-12-10 13:47:29Z)
    Cluster Summary:
      * Stack: corosync
      * Current DC: dc1hana4 (4) (version 2.1.5-9.el9_2.5-a3f44794f94) - partition with quorum
      * Last updated: Wed Dec 10 13:47:30 2025
      * Last change:  Wed Dec 10 13:45:23 2025 by hacluster via crmd on dc1hana4
      * 8 nodes configured
      * 0 resource instances configured
    
    Node List:
      * Node dc1hana1 (1): online, feature set 3.16.2
      * Node dc1hana2 (2): online, feature set 3.16.2
      * Node dc1hana3 (3): online, feature set 3.16.2
      * Node dc1hana4 (4): online, feature set 3.16.2
      * Node dc2hana1 (5): online, feature set 3.16.2
      * Node dc2hana2 (6): online, feature set 3.16.2
      * Node dc2hana3 (7): online, feature set 3.16.2
      * Node dc2hana4 (8): online, feature set 3.16.2
    
    Full List of Resources:
      * No resources
    
    Migration Summary:
    
    Tickets:
    
    PCSD Status:
      dc1hana1: Online
      dc1hana2: Online
      dc1hana3: Online
      dc1hana4: Online
      dc2hana1: Online
      dc2hana2: Online
      dc2hana3: Online
      dc2hana4: Online
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled

5.2. Configuring general cluster properties

You must adjust cluster resource defaults to avoid unnecessary failovers of the resources.

Procedure

  • Run the following command on one cluster node to update the default values of the resource-stickiness and migration-threshold parameters:

    [root]# pcs resource defaults update \
    resource-stickiness=1000 \
    migration-threshold=5000
    • resource-stickiness=1000 encourages a resource to stay on the node where it is currently running. This prevents the cluster from moving resources to other nodes for minor reasons, such as small changes in placement scores.
    • migration-threshold=5000 allows a resource to fail and restart on the same node up to 5000 times. After this limit is exceeded, the resource can no longer run on that node until the failure count is cleared. This allows resource recovery after occasional failures while giving an administrator time to investigate the cause of repeated failures and reset the counter, as shown in the example below.
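
    For example, you could inspect and reset the failure count of a resource with the following commands. The resource name is a placeholder; use the name of the affected resource:

    [root]# pcs resource failcount show <resource_name>
    [root]# pcs resource cleanup <resource_name>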

Verification

  • Check that the resource defaults are set:

    [root]# pcs resource defaults
    Meta Attrs: build-resource-defaults
      migration-threshold=5000
      resource-stickiness=1000

5.3. Configuring systemd integration for the pacemaker service

Systemd integration is the default behavior of SAP HANA installations on RHEL 9 for SAP HANA 2.0 SPS07 revision 70 and newer. In HA environments, you must apply additional modifications to integrate the different systemd services that are involved in the cluster setup.

Configure the pacemaker systemd service to manage the HANA instance systemd service in the correct order on all cluster nodes that run HANA instances.

Prerequisites

  • You have installed the HANA instances with systemd integration and have checked the systemd integration on all HANA nodes, for example:

    [root]# systemctl list-units --all SAP*
      UNIT              LOAD      ACTIVE   SUB     DESCRIPTION
      SAPRH1_02.service loaded    active   running SAP Instance SAPRH1_02
      SAP.slice         loaded    active   active  SAP Slice
    ...

Procedure

  1. Create the directory /etc/systemd/system/pacemaker.service.d/ for the pacemaker service drop-in file:

    [root]# mkdir /etc/systemd/system/pacemaker.service.d/
  2. Create the systemd drop-in file for the pacemaker service with the following content:

    [root]# cat << EOF > /etc/systemd/system/pacemaker.service.d/00-pacemaker.conf
    [Unit]
    Description=Pacemaker needs the SAP HANA instance service
    Wants=SAP<SID>_<instance>.service
    After=SAP<SID>_<instance>.service
    EOF
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
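
    For example, for the SAP HANA system RH1 with instance number 02 that is used throughout this chapter, the resulting drop-in file contains:

    [Unit]
    Description=Pacemaker needs the SAP HANA instance service
    Wants=SAPRH1_02.service
    After=SAPRH1_02.service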
  3. Reload the systemd manager configuration to activate the drop-in file:

    [root]# systemctl daemon-reload
  4. Repeat steps 1-3 on the other HANA cluster nodes.

Verification

  1. Check the systemd service of your HANA instance and verify that it is loaded:

    [root]# systemctl status SAPRH1_02.service
    ● SAPRH1_02.service - SAP Instance SAPRH1_02
         Loaded: loaded (/etc/systemd/system/SAPRH1_02.service; disabled; preset: disabled)
         Active: active (running) since xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       Main PID: 5825 (sapstartsrv)
          Tasks: 841
         Memory: 88.6G
            CPU: 4h 50min 2.033s
         CGroup: /SAP.slice/SAPRH1_02.service
                 ├─ 5825 /usr/sap/RH1/HDB02/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
                 ├─ 5986 sapstart pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
                 ├─ 5993 /usr/sap/RH1/HDB02/dc1hana1/trace/hdb.sapRH1_HDB02 -d -nw -f /usr/sap/RH1/HDB02/dc1hana1/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_dc1hana1
    ...
  2. Verify that the SAP HANA instance service is known to the pacemaker service now:

    [root]# systemctl show pacemaker.service | grep -E 'Wants=|After=|SAP.{6}.service'
    Wants=SAPRH1_02.service resource-agents-deps.target dbus-broker.service
    After=... SAPRH1_02.service …

    Make sure that the SAP<SID>_<instance>.service is listed in the After= and Wants= lists.

5.4. Installing the SAP HANA HA components

The resource-agents-sap-hana-scaleout RPM package in the Red Hat Enterprise Linux 9 for <arch> - SAP Solutions (RPMs) repository provides the resource agents and other SAP HANA specific components for setting up a HA cluster that manages a HANA scale-out system replication setup.

Procedure

  • Install the resource-agents-sap-hana-scaleout package on all cluster nodes:

    [root]# dnf install resource-agents-sap-hana-scaleout

Verification

  • Check on all nodes that the package is installed, for example:

    [root]# rpm -q resource-agents-sap-hana-scaleout
    resource-agents-sap-hana-scaleout-0.185.3-0.el9_2.noarch

5.5. Configuring the SAPHanaSR HA/DR provider for the srConnectionChanged() hook method

When you configure the HANA instance in a HA cluster setup with SAP HANA 2.0 SPS0 or later, you must enable and test the SAP HANA srConnectionChanged() hook method before proceeding with the cluster setup.

Prerequisites

  • You have installed the resource-agents-sap-hana-scaleout package.
  • Your HANA instance is not yet managed by the cluster. Otherwise, use the maintenance procedure Performing maintenance on the SAP HANA instances to make sure that the cluster does not interfere during the configuration of the hook scripts.

Procedure

  1. Stop the HANA instances on all nodes. Run the following command as the <sid>adm user on one host of each HANA site:

    rh1adm$ sapcontrol -nr ${TINSTANCE} -function StopSystem HDB
  2. Verify as <sid>adm on all sites that the HANA instances are stopped completely and their status is GRAY in the instance list. Run this on one host on each site:

    rh1adm$ sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList
    GetSystemInstanceList
    OK
    hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
    dc1hana2, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GRAY
    dc1hana3, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GRAY
    dc1hana1, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GRAY
    dc1hana4, 2, 50213, 50214, 0.3, HDB|HDB_STANDBY, GRAY
  3. Change to the HANA configuration directory, as the <sid>adm user, using the command alias cdcoc, which is built into the <sid>adm user shell. This automatically changes into the /hana/shared/<SID>/global/hdb/custom/config/ path:

    rh1adm$ cdcoc
  4. Update the global.ini file of the SAP HANA site to configure the SAPHanaSR hook. Edit the configuration file on one node of each HANA site and add the following configuration:

    [ha_dr_provider_SAPHanaSR]
    provider = SAPHanaSR
    path = /usr/share/SAPHanaSR-ScaleOut
    execution_order = 1
    
    [trace]
    ha_dr_saphanasr = info

    Set execution_order to 1 to ensure that the SAPHanaSR hook is always executed with the highest priority.

    Due to the shared /hana/shared filesystem between the nodes of each HANA site, you only adjust the configuration once per site. Do not try to edit the same file on the shared filesystem simultaneously on more than one node of the same site.

  5. Optional: If you also want to configure the ChkSrv hook, which takes action on an hdbindexserver failure, you can add its settings to the global.ini file at the same time. For details, see step 1 in Configuring the ChkSrv HA/DR provider for the srServiceStateChanged() hook method:

    [ha_dr_provider_SAPHanaSR]
    provider = SAPHanaSR
    path = /usr/share/SAPHanaSR-ScaleOut
    execution_order = 1
    
    [ha_dr_provider_chksrv]
    provider = ChkSrv
    path = /usr/share/SAPHanaSR-ScaleOut
    execution_order = 2
    action_on_lost = stop
    
    [trace]
    ha_dr_saphanasr = info
    ha_dr_chksrv = info
  6. Create the file /etc/sudoers.d/20-saphana, as the root user, on each cluster node with the following content. These command privileges allow the <sid>adm user to update certain cluster node attributes as part of the SAPHanaSR hook execution:

    [root]# visudo -f /etc/sudoers.d/20-saphana
    Defaults:<sid>adm !requiretty
    <sid>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_*
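
    For example, for the rh1adm user of the SAP HANA system RH1 used in this chapter, the file contains:

    Defaults:rh1adm !requiretty
    rh1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_*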
  7. Start the HANA instances on all cluster nodes manually without starting the HA cluster. Run as <sid>adm on one HANA instance per site:

    rh1adm$ sapcontrol -nr ${TINSTANCE} -function StartSystem HDB
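
    To monitor the startup, you can run the same sapcontrol query as in step 2. When the instances are fully started, all of them report GREEN in the dispstatus column:

    rh1adm$ sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList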

Verification

  1. Change to the SAP HANA directory, as the <sid>adm user, where trace log files are stored. Use the command alias cdtrace, which is built into the <sid>adm user shell:

    rh1adm$ cdtrace
  2. Check the HANA nameserver process logs for the HA/DR provider loading message:

    rh1adm$ grep -he "loading HA/DR Provider.*" nameserver_*
    1. If only the SAPHanaSR provider is configured:

      loading HA/DR Provider 'SAPHanaSR' from /usr/share/SAPHanaSR-ScaleOut
    2. If the optional ChkSrv provider is also implemented:

      loading HA/DR Provider 'ChkSrv' from /usr/share/SAPHanaSR-ScaleOut
      loading HA/DR Provider 'SAPHanaSR' from /usr/share/SAPHanaSR-ScaleOut
  3. Verify as user root in the system secure log that the sudo command executed successfully. If the sudoers file is not correct, an error is logged when the sudo command is executed. Check this on the primary node on which the HANA master nameserver is running, for example, dc1hana1:

    [root]# grep -e 'sudo.*crm_attribute.*' /var/log/secure
    sudo[17141]:  rh1adm : PWD=/hana/shared/RH1/HDB02/dc1hana1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_gsh -v 1.0 -l reboot
    sudo[17160]:  rh1adm : PWD=/hana/shared/RH1/HDB02/dc1hana1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
    …
    sudo[17584]:  rh1adm : PWD=/hana/shared/RH1/HDB02/dc1hana1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_glob_srHook -v SOK -t crm_config -s SAPHanaSR

    After the HANA instances start on both sites, you can usually see several srHook attribute updates. Initially the attribute is set to SFAIL because, immediately after the primary starts, the secondary is still synchronizing and the system replication is not yet in sync.

    The last update to SOK is triggered by the HANA event after the system replication status finally changes to fully in sync.

  4. Repeat verification steps 1-2 on the second site, if you have not already done so. The sudo log messages from step 3 are only visible on the master nameserver node of the primary instance, where the system replication events are logged.
  5. Check the cluster attributes on any node and verify that the value for the hana_<sid>_glob_srHook attribute is updated as expected:

    [root]# cibadmin --query | grep -e 'SAPHanaSR.*srHook'
     <nvpair id="SAPHanaSR-hana_rh1_glob_srHook" name="hana_rh1_glob_srHook" value="SOK"/>
    • SOK is set when the HANA system replication is in ACTIVE state, which means established and fully in sync.
    • SFAIL is set when the system replication is in any other state.

5.6. Configuring the ChkSrv HA/DR provider for the srServiceStateChanged() hook method

You can configure the ChkSrv hook if you want the HANA instance to be stopped or killed for faster recovery after an indexserver process failure. This configuration is optional.

Prerequisites

  • You have configured the SAPHanaSR hook for the srConnectionChanged() hook method, because the ChkSrv configuration is added to the same global.ini file.

Procedure

  1. Change to the HANA configuration directory as the <sid>adm user. Use the command alias cdcoc, which is built into the <sid>adm user shell. This automatically changes into the /hana/shared/<SID>/global/hdb/custom/config/ path:

    rh1adm$ cdcoc
  2. Update the global.ini file of the HANA site to configure the hook script. Edit the configuration file on one node of each HANA site and add the following content in addition to the SAPHanaSR provider definition:

    [ha_dr_provider_chksrv]
    provider = ChkSrv
    path = /usr/share/SAPHanaSR-ScaleOut
    execution_order = 2
    action_on_lost = stop
    
    [trace]
    ha_dr_saphanasr = info
    ha_dr_chksrv = info

    Due to the shared /hana/shared filesystem between the nodes of each HANA site, you only adjust the configuration once per site. Do not try to edit the same file on the shared filesystem simultaneously on more than one node of the same site.

  3. Optional: Activate the ChkSrv provider while HANA is running by reloading the HA/DR providers. Skip this step if you configure the hook script while the instance is down; in that case, the HA/DR provider is loaded automatically at the next instance start.

    rh1adm$ hdbnsutil -reloadHADRProviders

Verification

  1. Change to the SAP HANA directory, as the <sid>adm user, where trace log files are stored. Use the command alias cdtrace, which is built into the <sid>adm user shell:

    rh1adm$ cdtrace
  2. Check that the changes are loaded:

    rh1adm$ grep -e "loading HA/DR Provider.*ChkSrv.*" nameserver_*
    loading HA/DR Provider 'ChkSrv' from /usr/share/SAPHanaSR-ScaleOut
  3. Check that the dedicated trace log file is created and the provider loaded with the correct configuration parameters:

    rh1adm$ cat nameserver_chksrv.trc
    init called
    ChkSrv.init() version 0.7.8, parameter info: action_on_lost=stop stop_timeout=20 kill_signal=9
    …

5.7. Creating the HANA cluster resources

You must configure the SAPHanaTopology and SAPHanaController resources so that the cluster can collect the status of the HANA landscape, monitor the instance health and take action to manage the instance when required.

Prerequisites

  • You have installed the cluster and you have configured all HANA nodes in the cluster.
  • You have configured the HANA system replication between your HANA sites.
  • All HANA instances are running and the system replication is healthy.

Procedure

  1. Create the SAPHanaTopology resource as a clone resource, which means it runs on all cluster nodes at the same time:

    [root]# pcs resource create rsc_SAPHanaTop_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaTopology \
    SID=<SID> \
    InstanceNumber=<instance> \
    op start timeout=600 \
    op stop timeout=300 \
    op monitor interval=30 timeout=300 \
    clone cln_SAPHanaTop_<SID>_HDB<instance>
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
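
    For example, for the SAP HANA system RH1 with instance number 02, the command would look like this:

    [root]# pcs resource create rsc_SAPHanaTop_RH1_HDB02 \
    ocf:heartbeat:SAPHanaTopology \
    SID=RH1 \
    InstanceNumber=02 \
    op start timeout=600 \
    op stop timeout=300 \
    op monitor interval=30 timeout=300 \
    clone cln_SAPHanaTop_RH1_HDB02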
  2. Update the meta attributes of the new SAPHanaTopology clone resource:

    [root]# pcs resource update cln_SAPHanaTop_<SID>_HDB<instance> \
    meta clone-node-max=1 interleave=true
  3. Create the SAPHanaController resource as a promotable clone resource. This means it runs on all cluster nodes at the same time, but on one node it functions as the active, or primary, resource:

    [root]# pcs resource create rsc_SAPHanaCon_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaController \
    SID=<SID> \
    InstanceNumber=<instance> \
    PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 \
    AUTOMATED_REGISTER=false \
    op stop timeout=3600 \
    op monitor interval=59 role=Promoted timeout=700 \
    op monitor interval=61 role=Unpromoted timeout=700 \
    meta priority=100 \
    promotable cln_SAPHanaCon_<SID>_HDB<instance>

    We recommend that you create the resource with AUTOMATED_REGISTER=false and then verify the correct behavior and data consistency through tests before completing the setup. For more information, see Testing the setup. If you prefer, you can enable automated registration already at creation time by setting the parameter to true.

    See SAPHanaController resource parameters for more details.
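
    For example, for the SAP HANA system RH1 with instance number 02, the command would look like this:

    [root]# pcs resource create rsc_SAPHanaCon_RH1_HDB02 \
    ocf:heartbeat:SAPHanaController \
    SID=RH1 \
    InstanceNumber=02 \
    PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 \
    AUTOMATED_REGISTER=false \
    op stop timeout=3600 \
    op monitor interval=59 role=Promoted timeout=700 \
    op monitor interval=61 role=Unpromoted timeout=700 \
    meta priority=100 \
    promotable cln_SAPHanaCon_RH1_HDB02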

  4. Update the meta attributes of the new SAPHanaController clone resource:

    [root]# pcs resource update cln_SAPHanaCon_<SID>_HDB<instance> \
    meta clone-node-max=1 interleave=true
  5. You must start the SAPHanaTopology resource before the SAPHanaController resource, because it collects HANA landscape information, which the SAPHanaController resource requires to start correctly. Create the cluster constraint that enforces the correct start order of the two resources:

    [root]# pcs constraint order cln_SAPHanaTop_<SID>_HDB<instance> \
    then cln_SAPHanaCon_<SID>_HDB<instance> symmetrical=false

    Setting symmetrical=false indicates that the constraint only influences the start order of the resources, but it does not apply to the stop order.
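
    For example, for the SAP HANA system RH1 with instance number 02, the command would look like this:

    [root]# pcs constraint order cln_SAPHanaTop_RH1_HDB02 \
    then cln_SAPHanaCon_RH1_HDB02 symmetrical=false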

Verification

  1. Review the SAPHanaTopology resource clone. For operations that you do not define at resource creation, pcs automatically applies default values. Example resource configuration:

    [root]# pcs resource config cln_SAPHanaTop_RH1_HDB02
    Clone: cln_SAPHanaTop_RH1_HDB02
      Meta Attributes: cln_SAPHanaTop_RH1_HDB02-meta_attributes
        clone-node-max=1
        interleave=true
      Resource: rsc_SAPHanaTop_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaTopology)
        Attributes: rsc_SAPHanaTop_RH1_HDB02-instance_attributes
          InstanceNumber=02
          SID=RH1
        Operations:
          methods: rsc_SAPHanaTop_RH1_HDB02-methods-interval-0s
            interval=0s
            timeout=5
          monitor: rsc_SAPHanaTop_RH1_HDB02-monitor-interval-30
            interval=30
            timeout=300
          reload: rsc_SAPHanaTop_RH1_HDB02-reload-interval-0s
            interval=0s
            timeout=5
          start: rsc_SAPHanaTop_RH1_HDB02-start-interval-0s
            interval=0s
            timeout=600
          stop: rsc_SAPHanaTop_RH1_HDB02-stop-interval-0s
            interval=0s
            timeout=300
  2. Review the SAPHanaController resource clone. Example resource configuration:

    [root]# pcs resource config cln_SAPHanaCon_RH1_HDB02
    Clone: cln_SAPHanaCon_RH1_HDB02
      Meta Attributes: cln_SAPHanaCon_RH1_HDB02-meta_attributes
        clone-node-max=1
        interleave=true
        promotable=true
      Resource: rsc_SAPHanaCon_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaController)
        Attributes: rsc_SAPHanaCon_RH1_HDB02-instance_attributes
          AUTOMATED_REGISTER=false
          DUPLICATE_PRIMARY_TIMEOUT=7200
          InstanceNumber=02
          PREFER_SITE_TAKEOVER=true
          SID=RH1
        Meta Attributes: rsc_SAPHanaCon_RH1_HDB02-meta_attributes
          priority=100
        Operations:
          demote: rsc_SAPHanaCon_RH1_HDB02-demote-interval-0s
            interval=0s
            timeout=320
          methods: rsc_SAPHanaCon_RH1_HDB02-methods-interval-0s
            interval=0s
            timeout=5
          monitor: rsc_SAPHanaCon_RH1_HDB02-monitor-interval-59
            interval=59
            timeout=700
            role=Promoted
          monitor: rsc_SAPHanaCon_RH1_HDB02-monitor-interval-61
            interval=61
            timeout=700
            role=Unpromoted
          promote: rsc_SAPHanaCon_RH1_HDB02-promote-interval-0s
            interval=0s
            timeout=3600
          reload: rsc_SAPHanaCon_RH1_HDB02-reload-interval-0s
            interval=0s
            timeout=5
          start: rsc_SAPHanaCon_RH1_HDB02-start-interval-0s
            interval=0s
            timeout=3600
          stop: rsc_SAPHanaCon_RH1_HDB02-stop-interval-0s
            interval=0s
            timeout=3600
  3. Check that the start order constraint is in place:

    [root]# pcs constraint order
    Order Constraints:
      start resource 'cln_SAPHanaTop_RH1_HDB02' then start resource 'cln_SAPHanaCon_RH1_HDB02'
        symmetrical=0
  4. Check the cluster status. Use --full to include node attributes, which are updated by the HANA resources:

    [root]# pcs status --full
    ...
    Full List of Resources:
      * rsc_fence       (stonith:<fence agent>):     Started dc1hana1
      * Clone Set: cln_SAPHanaTop_RH1_HDB02 [rsc_SAPHanaTop_RH1_HDB02]:
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc1hana4
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc2hana1
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc2hana2
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc2hana3
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc2hana4
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc1hana1
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc1hana2
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started dc1hana3
      * Clone Set: cln_SAPHanaCon_RH1_HDB02 [rsc_SAPHanaCon_RH1_HDB02] (promotable):
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc1hana4
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc2hana1
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc2hana2
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc2hana3
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc2hana4
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Promoted dc1hana1
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc1hana2
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted dc1hana3
    
    Node Attributes:
      * Node: dc1hana1 (1):
        * hana_rh1_clone_state              : PROMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : master1:master:worker:master
        * hana_rh1_site                     : DC1
        * master-rsc_SAPHanaCon_RH1_HDB02   : 150
      * Node: dc1hana2 (2):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : master2:slave:worker:slave
        * hana_rh1_site                     : DC1
        * master-rsc_SAPHanaCon_RH1_HDB02   : 140
      * Node: dc1hana3 (3):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : slave:slave:worker:slave
        * hana_rh1_site                     : DC1
        * master-rsc_SAPHanaCon_RH1_HDB02   : -10000
      * Node: dc1hana4 (4):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : master3:slave:standby:standby
        * hana_rh1_site                     : DC1
        * master-rsc_SAPHanaCon_RH1_HDB02   : 140
      * Node: dc2hana1 (5):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : master1:master:worker:master
        * hana_rh1_site                     : DC2
        * master-rsc_SAPHanaCon_RH1_HDB02   : 100
      * Node: dc2hana2 (6):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : master2:slave:worker:slave
        * hana_rh1_site                     : DC2
        * master-rsc_SAPHanaCon_RH1_HDB02   : 80
      * Node: dc2hana3 (7):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : slave:slave:worker:slave
        * hana_rh1_site                     : DC2
        * master-rsc_SAPHanaCon_RH1_HDB02   : -12200
      * Node: dc2hana4 (8):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_gra                      : 2.0
        * hana_rh1_gsh                      : 1.0
        * hana_rh1_roles                    : master3:slave:standby:standby
        * hana_rh1_site                     : DC2
        * master-rsc_SAPHanaCon_RH1_HDB02   : 80
    
    ...
Note

The timeouts shown for the resource operations are only recommended defaults and can be adjusted depending on your SAP HANA environment. For example, large SAP HANA databases can take longer to start up and therefore you might have to increase the start timeout.

Warning

Setting AUTOMATED_REGISTER to true can potentially increase the risk of data loss or corruption. If the HA cluster triggers a takeover while the data on the secondary HANA instance is not fully in sync, the automatic registration of the old primary HANA instance as the new secondary HANA instance results in data loss on that instance: any data that was not synced before the takeover occurred is lost.

For more information, see the article on the SAP Technology Blog for Members: Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 2: Failure of Both Nodes.

5.8. Creating the virtual IP resource

You must configure a virtual IP (VIP) resource for SAP clients to access the primary HANA instance independently from the cluster node it is currently running on. Configure the VIP resource to automatically move to the node where the primary instance is running.

The resource agent you need for the VIP resource depends on the platform you use. We are using the IPaddr2 resource agent to demonstrate the setup.

Prerequisites

  • You have reserved a virtual IP for the service.

Procedure

  1. Use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. Adjust the parameters according to the resource agent you are using. Create the cluster resource for the primary virtual IP, for example, using the IPaddr2 agent:

    [root]# pcs resource create rsc_vip_<SID>_HDB<instance>_primary \
    ocf:heartbeat:IPaddr2 ip=<address> cidr_netmask=<netmask> nic=<device>
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
    • Replace <address>, <netmask> and <device> with the details of your primary virtual IP address.
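
    For example, using the address 192.168.1.100 shown in the verification below, the command would look like this:

    [root]# pcs resource create rsc_vip_RH1_HDB02_primary \
    ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=32 nic=eth0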
  2. Create a cluster constraint that places the VIP resource with the SAPHanaController resource on the HANA primary node:

    [root]# pcs constraint colocation add rsc_vip_<SID>_HDB<instance>_primary \
    with promoted cln_SAPHanaCon_<SID>_HDB<instance> 2000

    The constraint applies a score of 2000 instead of the default INFINITY. This softens the resource dependency and allows the virtual IP resource to stay active when there is no promoted SAPHanaController resource. This way, tools such as the SAP Management Console (MMC) or SAP Landscape Management (LaMa) can still reach this IP address to query the status information of the HANA instance.
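
    For example, for the SAP HANA system RH1 with instance number 02, the command would look like this:

    [root]# pcs constraint colocation add rsc_vip_RH1_HDB02_primary \
    with promoted cln_SAPHanaCon_RH1_HDB02 2000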

Verification

  1. Check the resource configuration of the virtual IP resource, for example:

    [root]# pcs resource config rsc_vip_RH1_HDB02_primary
    Resource: rsc_vip_RH1_HDB02_primary (class=ocf provider=heartbeat type=IPaddr2)
      Attributes: rsc_vip_RH1_HDB02_primary-instance_attributes
        cidr_netmask=32
        ip=192.168.1.100
        nic=eth0
      Operations:
        monitor: rsc_vip_RH1_HDB02_primary-monitor-interval-10s
          interval=10s
          timeout=20s
        start: rsc_vip_RH1_HDB02_primary-start-interval-0s
          interval=0s
          timeout=20s
        stop: rsc_vip_RH1_HDB02_primary-stop-interval-0s
          interval=0s
          timeout=20s
  2. Check that the constraint is defined correctly:

    [root]# pcs constraint colocation
    Colocation Constraints:
      rsc_vip_RH1_HDB02_primary with cln_SAPHanaCon_RH1_HDB02 (score:2000) (rsc-role:Started) (with-rsc-role:Promoted)
  3. Check that the resource is running on the promoted primary node, for example, dc1hana1:

    [root]# pcs status resources rsc_vip_RH1_HDB02_primary
      * rsc_vip_RH1_HDB02_primary   (ocf:heartbeat:IPaddr2):         Started dc1hana1

5.9. Creating the virtual IP resource for the Active/Active (read-enabled) setup

To support the Active/Active (read-enabled) secondary setup, you must add a second virtual IP to provide client access to the secondary SAP HANA site.

Configure additional rules to ensure that the second virtual IP is always associated with a healthy SAP HANA site, maximizing client access and availability.

  • Normal operation

    When both primary and secondary SAP HANA sites are active and the system replication is in sync, the second virtual IP is assigned to the secondary instance.

  • Secondary unavailable or out of sync

    If the secondary site is down or the system replication is not in sync, the virtual IP moves to the primary instance. It automatically returns to the secondary instance as soon as the system replication is back in sync.

  • Failover scenario

    If the cluster triggers a takeover, the second virtual IP initially remains on the instance where it was running. After the former primary instance takes over the secondary role and the system replication is in sync again, the virtual IP moves to the new secondary instance.

Prerequisites

  • You have set operationMode=logreplay_readaccess when registering the secondary SAP HANA instance for system replication with the primary site.

Procedure

  1. Use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. Adjust the parameters according to the resource agent you are using. Create the cluster resource for the secondary virtual IP, for example, using the IPaddr2 agent:

    [root]# pcs resource create rsc_vip_<SID>_HDB<instance>_readonly \
    ocf:heartbeat:IPaddr2 ip=<address> cidr_netmask=<netmask> nic=<device>
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
    • Replace <address>, <netmask> and <device> with the details of your read-only secondary virtual IP address.
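
    For example, using the address 192.168.1.200 shown in the verification below, the command would look like this:

    [root]# pcs resource create rsc_vip_RH1_HDB02_readonly \
    ocf:heartbeat:IPaddr2 ip=192.168.1.200 cidr_netmask=32 nic=eth0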
  2. Create a location constraint rule to ensure that the secondary virtual IP is assigned to the secondary site during normal operations:

    [root]# pcs constraint location rsc_vip_<SID>_HDB<instance>_readonly \
    rule score=INFINITY master-rsc_SAPHanaCon_<SID>_HDB<instance> eq 100 \
    and hana_<sid>_clone_state eq DEMOTED
    • Replace <SID> with your HANA SID.
    • Replace <sid> with the lower-case HANA SID.
    • Replace <instance> with your HANA instance number.
  3. Create a location constraint rule to ensure that the secondary virtual IP runs on the primary site as an alternative whenever necessary:

    [root]# pcs constraint location rsc_vip_<SID>_HDB<instance>_readonly \
    rule score=2000 master-rsc_SAPHanaCon_<SID>_HDB<instance> eq 150 \
    and hana_<sid>_clone_state eq PROMOTED
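
    For example, for the SAP HANA system RH1 with instance number 02, the two location constraint rules would look like this:

    [root]# pcs constraint location rsc_vip_RH1_HDB02_readonly \
    rule score=INFINITY master-rsc_SAPHanaCon_RH1_HDB02 eq 100 \
    and hana_rh1_clone_state eq DEMOTED
    [root]# pcs constraint location rsc_vip_RH1_HDB02_readonly \
    rule score=2000 master-rsc_SAPHanaCon_RH1_HDB02 eq 150 \
    and hana_rh1_clone_state eq PROMOTED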

Verification

  1. Check the resource configuration of the secondary virtual IP resource, for example:

    [root]# pcs resource config rsc_vip_RH1_HDB02_readonly
    Resource: rsc_vip_RH1_HDB02_readonly (class=ocf provider=heartbeat type=IPaddr2)
      Attributes: rsc_vip_RH1_HDB02_readonly-instance_attributes
        cidr_netmask=32
        ip=192.168.1.200
        nic=eth0
      Operations:
        monitor: rsc_vip_RH1_HDB02_readonly-monitor-interval-10s
          interval=10s
          timeout=20s
        start: rsc_vip_RH1_HDB02_readonly-start-interval-0s
          interval=0s
          timeout=20s
        stop: rsc_vip_RH1_HDB02_readonly-stop-interval-0s
          interval=0s
          timeout=20s
  2. Check that the constraints are part of the cluster configuration:

    [root]#  pcs constraint location
    Location Constraints:
      Resource: rsc_vip_RH1_HDB02_readonly
        Constraint: location-rsc_vip_RH1_HDB02_readonly
          Rule: boolean-op=and score=INFINITY
            Expression: master-rsc_SAPHanaCon_RH1_HDB02 eq 100
            Expression: hana_rh1_clone_state eq DEMOTED
        Constraint: location-rsc_vip_RH1_HDB02_readonly-1
          Rule: boolean-op=and score=2000
            Expression: master-rsc_SAPHanaCon_RH1_HDB02 eq 150
            Expression: hana_rh1_clone_state eq PROMOTED
  3. Check that the resource is running on the main secondary node, for example, dc2hana1:

    [root]# pcs status resources rsc_vip_RH1_HDB02_readonly
      * rsc_vip_RH1_HDB02_readonly  (ocf:heartbeat:IPaddr2):         Started dc2hana1