Chapter 5. Configuring the Pacemaker cluster


5.1. Deploying the basic cluster configuration

The following basic cluster setup covers the minimum steps to get started with a Pacemaker cluster for managing SAP HANA system replication.

For more information on settings and options for complex configurations, refer to the documentation for RHEL HA Add-On, for example, Create a high availability cluster with multiple links.

Prerequisites

  • You have set up the HANA system replication environment and verified that it is working correctly.
  • You have configured the RHEL High Availability repository on all systems that are going to be nodes of this cluster.
  • You have verified fencing and quorum requirements according to your planned environment. For more details see HA cluster requirements.

Procedure

  1. Install the Red Hat High Availability Add-On software packages from the High Availability repository. Choose which fence agents you want to install and execute the installation on all cluster nodes.

    1. Either install the cluster packages and all fence agents:

      [root]# dnf install pcs pacemaker fence-agents-all
    2. Or install the cluster packages and only a specific fence agent, depending on your environment:

      [root]# dnf install pcs pacemaker fence-agents-<model>
  2. Start and enable the pcsd service on all cluster nodes:

    [root]# systemctl enable --now pcsd.service
  3. Optional: If you are running the firewalld service, enable the ports that are required by the Red Hat High Availability Add-On. Run this on all cluster nodes:

    [root]# firewall-cmd --add-service=high-availability
    [root]# firewall-cmd --runtime-to-permanent
  4. Set a password for the user hacluster. Repeat the command on each node using the same password:

    [root]# passwd hacluster
  5. Authenticate the user hacluster for each node in the cluster. Run this on the first node:

    [root]# pcs host auth <node1> <node2>
    Username: hacluster
    Password:
    <node1>: Authorized
    <node2>: Authorized
    • Enter the node names with or without FQDN, as defined in the /etc/hosts file.
    • Enter the hacluster user password in the prompt.
  6. Create the cluster with a name and provide the names of the cluster members, for example node1 and node2 with fully qualified host names. This propagates the cluster configuration on both nodes and starts the cluster. Run this command on the first node:

    [root]# pcs cluster setup <cluster_name> --start <node1> <node2>
    No addresses specified for host 'node1', using 'node1'
    No addresses specified for host 'node2', using 'node2'
    Destroying cluster on hosts: 'node1', 'node2'...
    node2: Successfully destroyed cluster
    node1: Successfully destroyed cluster
    Requesting remove 'pcsd settings' from 'node1', 'node2'
    node1: successful removal of the file 'pcsd settings'
    node2: successful removal of the file 'pcsd settings'
    Sending 'corosync authkey', 'pacemaker authkey' to 'node1', 'node2'
    node1: successful distribution of the file 'corosync authkey'
    node1: successful distribution of the file 'pacemaker authkey'
    node2: successful distribution of the file 'corosync authkey'
    node2: successful distribution of the file 'pacemaker authkey'
    Sending 'corosync.conf' to 'node1', 'node2'
    node1: successful distribution of the file 'corosync.conf'
    node2: successful distribution of the file 'corosync.conf'
    Cluster has been successfully set up.
    Starting cluster on hosts: 'node1', 'node2'...
  7. Enable the cluster to be started automatically on system start, which enables the corosync and pacemaker services. Skip this step if you prefer to manually control the start of the cluster after a node restarts. Run on one node:

    [root]# pcs cluster enable --all
    node1: Cluster Enabled
    node2: Cluster Enabled

Verification

  • Check the cluster status. Verify that the cluster daemon services are in the desired state:

    [root]# pcs status --full
    Cluster name: node1-node2-cluster
    
    WARNINGS:
    No stonith devices and stonith-enabled is not false
    
    Cluster Status:
     Cluster Summary:
       * Stack: corosync (Pacemaker is running)
       * Current DC: node1 (version <version>) - partition with quorum
       * Last updated: <timestamp> on node1
       * Last change: <timestamp> by hacluster via hacluster on node1
       * 2 nodes configured
       * 0 resource instances configured
    
    ...
    
    PCSD Status:
      node1: Online
      node2: Online
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled


5.2. Configuring general cluster properties

You must adjust cluster resource defaults to avoid unnecessary failovers of the resources.

Procedure

  • Update the cluster resource defaults. Run the command on one cluster node to apply the change to the cluster configuration:

    [root]# pcs resource defaults update \
    resource-stickiness=1000 \
    migration-threshold=5000
    • resource-stickiness=1000 encourages the resource to stay running where it is. This prevents the cluster from moving resources in response to minor changes in placement scores.
    • migration-threshold=5000 allows the resource to fail and restart on the same node up to 5000 times. After this limit is exceeded, the resource is banned from the node until the failure count is cleared. This permits recovery after occasional failures and gives an administrator time to investigate the cause of repeated failures and reset the counter.
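When you investigate repeated failures, you can inspect and reset the failure counter with pcs. A minimal sketch, using as an example the SAPHanaController resource name that is created later in this chapter:

```shell
# Show the current failure count of a resource per node
# (the resource name is an example; adjust it to your setup).
pcs resource failcount show rsc_SAPHanaCon_RH1_HDB02

# After the cause has been investigated and fixed, clear the failure
# count so the resource is no longer blocked on the node.
pcs resource cleanup rsc_SAPHanaCon_RH1_HDB02
```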

Verification

  • Check that the resource defaults are set:

    [root]# pcs resource defaults
    Meta Attrs: build-resource-defaults
      migration-threshold=5000
      resource-stickiness=1000

5.3. Configuring systemd integration for SAP HANA

Systemd integration is the default behavior of SAP HANA installations on RHEL 9 with SAP HANA 2.0 SPS07 revision 70 and later. In HA environments, you must apply additional modifications to integrate the various systemd services that are involved in the cluster setup.

Configure the pacemaker systemd service so that it starts and stops in the correct order relative to the HANA instance systemd service.

Prerequisites

  • You have installed the HANA instance with systemd integration and have checked on all nodes, for example:

    [root]# systemctl list-units --all SAP*
      UNIT              LOAD      ACTIVE   SUB     DESCRIPTION
      SAPRH1_02.service loaded    active   running SAP Instance SAPRH1_02
      SAP.slice         loaded    active   active  SAP Slice
    ...

Procedure

  1. Create the directory /etc/systemd/system/pacemaker.service.d/ for the pacemaker service drop-in file:

    [root]# mkdir /etc/systemd/system/pacemaker.service.d/
  2. Create the systemd drop-in file for the pacemaker service with the following content:

    [root]# cat << EOF > /etc/systemd/system/pacemaker.service.d/00-pacemaker.conf
    [Unit]
    Description=Pacemaker needs the SAP HANA instance service
    Wants=SAP<SID>_<instance>.service
    After=SAP<SID>_<instance>.service
    EOF
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
  3. Reload the systemctl daemon to activate the drop-in file:

    [root]# systemctl daemon-reload
  4. Repeat steps 1-3 on the other cluster nodes.
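The drop-in file creation can also be scripted. The following sketch assumes the example SID RH1 and instance number 02 used elsewhere in this chapter, and writes to a scratch directory so that you can inspect the result before placing it under /etc/systemd/system/pacemaker.service.d/:

```shell
# Generate the pacemaker drop-in file from SID and instance variables.
SID=RH1          # example HANA SID; replace with yours
INSTANCE=02      # example instance number; replace with yours
DROPIN_DIR=$(mktemp -d)   # on a real node: /etc/systemd/system/pacemaker.service.d/

cat << EOF > "${DROPIN_DIR}/00-pacemaker.conf"
[Unit]
Description=Pacemaker needs the SAP HANA instance service
Wants=SAP${SID}_${INSTANCE}.service
After=SAP${SID}_${INSTANCE}.service
EOF

# Display the generated drop-in for review
cat "${DROPIN_DIR}/00-pacemaker.conf"
```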

Verification

  1. Check the systemd service of your HANA instance and verify that it is loaded:

    [root]# systemctl status SAPRH1_02.service
    ● SAPRH1_02.service - SAP Instance SAPRH1_02
         Loaded: loaded (/etc/systemd/system/SAPRH1_02.service; disabled; preset: disabled)
         Active: active (running) since xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       Main PID: 5825 (sapstartsrv)
          Tasks: 841
         Memory: 88.6G
            CPU: 4h 50min 2.033s
         CGroup: /SAP.slice/SAPRH1_02.service
                 ├─ 5825 /usr/sap/RH1/HDB02/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_node1
                 ├─ 5986 sapstart pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_node1
                 ├─ 5993 /usr/sap/RH1/HDB02/node1/trace/hdb.sapRH1_HDB02 -d -nw -f /usr/sap/RH1/HDB02/node1/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_node1
    ...
  2. Verify that the SAP HANA instance service is known to the pacemaker service now:

    [root]# systemctl show pacemaker.service | grep SAP
    Wants=SAPRH1_02.service resource-agents-deps.target dbus-broker.service
    After=... SAPRH1_02.service rsyslog.service

    Make sure that the SAP<SID>_<instance>.service is listed in the After= and Wants= lists.

5.4. Installing the SAP HANA HA components

The sap-hana-ha RPM package in the Red Hat Enterprise Linux 9 for <arch> - SAP Solutions (RPMs) repository provides resource agents and other SAP HANA specific components for setting up an HA cluster that manages a HANA system replication setup.

Important

The package sap-hana-ha is only available since RHEL 9.4. If you configure a cluster for HANA HA on an older RHEL 9 release, follow the instructions in Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On instead.

Procedure

  • Install the sap-hana-ha package on all cluster nodes:

    [root]# dnf install sap-hana-ha

Verification

  • Check on all nodes that the package is installed:

    [root]# rpm -q sap-hana-ha
    sap-hana-ha-<version>.<release>.noarch

5.5. Configuring the HanaSR HA/DR provider for the srConnectionChanged() hook method

When you configure the HANA instance in an HA cluster setup with SAP HANA 2.0 SPS0 or later, you must enable and test the SAP HANA srConnectionChanged() hook method before proceeding with the cluster setup.

Prerequisites

  • You have installed the sap-hana-ha package.
  • Your HANA instance is not yet managed by the cluster. Otherwise, use the maintenance procedure Updating the SAP HANA instances to make sure that the cluster does not interfere during the configuration of the hook scripts.

Procedure

  1. Stop the HANA instances on all nodes as the <sid>adm user:

    rh1adm $ HDB stop
  2. Verify as <sid>adm on all nodes that the HANA instances are stopped completely:

    rh1adm $ sapcontrol -nr ${TINSTANCE} -function GetProcessList | column -s ',' -t
    ...
    name                  description   dispstatus   textstatus   starttime   elapsedtime   pid
    hdbdaemon             HDB Daemon    GRAY         Stopped                                5381
  3. Change to the HANA configuration directory, as the <sid>adm user, using the command alias cdcoc, which is built into the <sid>adm user shell. This automatically changes into the /hana/shared/<SID>/global/hdb/custom/config/ path:

    rh1adm $ cdcoc
  4. Update the global.ini file of the SAP HANA instance to configure the HanaSR hook. Edit the configuration file on all HANA instance nodes and add the following configuration:

    [ha_dr_provider_hanasr]
    provider = HanaSR
    path = /usr/share/sap-hana-ha/
    execution_order = 1
    
    [trace]
    ha_dr_hanasr = info

    Set execution_order to 1 to ensure that the HanaSR hook is always executed with the highest priority.

  5. Optional: If you also want to configure the ChkSrv hook for taking action on an hdbindexserver failure, you can add both changes to the global.ini file at the same time. See step 1 in Configuring the ChkSrv HA/DR provider for the srServiceStateChanged() hook method:

    [ha_dr_provider_hanasr]
    provider = HanaSR
    path = /usr/share/sap-hana-ha/
    execution_order = 1
    
    [ha_dr_provider_chksrv]
    provider = ChkSrv
    path = /usr/share/sap-hana-ha/
    execution_order = 2
    action_on_lost = stop
    
    [trace]
    ha_dr_hanasr = info
    ha_dr_chksrv = info
  6. Create the file /etc/sudoers.d/20-saphana, as the root user, on each cluster node with the following content. These command privileges allow the <sid>adm user to update certain cluster node attributes as part of the HanaSR hook execution:

    [root]# visudo -f /etc/sudoers.d/20-saphana
    <sid>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_*
    Defaults:<sid>adm !requiretty
  7. Start the HANA instances on both cluster nodes manually without starting the HA cluster. Run as <sid>adm:

    rh1adm $ HDB start
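You can sanity-check the sudoers entry from step 6 before relying on it. A sketch, assuming the example rh1adm user from this chapter:

```shell
# Validate the syntax of the dedicated sudoers file;
# visudo reports "parsed OK" for a well-formed file.
visudo -cf /etc/sudoers.d/20-saphana

# List the sudo privileges of the <sid>adm user and confirm
# that the crm_attribute rule is present.
sudo -l -U rh1adm | grep crm_attribute
```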

Verification

  1. Change to the SAP HANA directory, as the <sid>adm user, where trace log files are stored. Use the command alias cdtrace, which is built into the <sid>adm user shell:

    rh1adm $ cdtrace
  2. Check the HANA nameserver service logs for the HA/DR provider loading message:

    1. If only the HanaSR provider is configured:

      rh1adm $ grep -he "loading HA/DR Provider.*" nameserver_*
      loading HA/DR Provider 'HanaSR' from /usr/share/sap-hana-ha/
    2. If the optional ChkSrv provider is also implemented:

      rh1adm $ grep -he "loading HA/DR Provider.*" nameserver_*
      loading HA/DR Provider 'ChkSrv' from /usr/share/sap-hana-ha/
      loading HA/DR Provider 'HanaSR' from /usr/share/sap-hana-ha/
  3. Verify as user root in the system secure log that the sudo command executed successfully. If the sudoers file is not correct, an error is logged when the sudo command is executed:

    [root]# grep -e 'sudo.*crm_attribute.*' /var/log/secure
     sudo[4267]:  rh1adm : PWD=/hana/shared/RH1/HDB02/node1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SFAIL -t crm_config -s SAPHanaSR
     sudo[4319]:  rh1adm : PWD=/hana/shared/RH1/HDB02/node1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SOK -t crm_config -s SAPHanaSR

    After the HANA instance starts on both nodes, you can usually see several srHook attribute updates. The first update sets SFAIL because, immediately after the primary starts, it is not yet in sync with the secondary, which is still synchronizing at this time.

    The last update to SOK is triggered by the HANA event after the system replication status finally changes to fully in sync.

  4. Repeat the verification steps 1-2 on the second instance, if not already done at the same time. The sudo log messages of step 3 are only visible on the primary instance node where the system replication events are sent.
  5. Check the cluster attributes on any node and verify that the value for the hana_<sid>_site_srHook_<DC2> attribute is updated as expected:

    [root]# cibadmin --query | grep -e 'SAPHanaSR.*srHook'
            <nvpair id="SAPHanaSR-hana_rh1_site_srHook_DC2" name="hana_rh1_site_srHook_DC2" value="SOK"/>
    • SOK is set when the HANA system replication is in ACTIVE state, which means established and fully in sync.
    • SFAIL is set when the system replication is in any other state.
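As an alternative to querying the full CIB, you can read the srHook attribute directly with crm_attribute. A sketch, using the example attribute name from the output above:

```shell
# Query the srHook cluster attribute for site DC2
# (adjust the SID and site name to your setup).
crm_attribute --name hana_rh1_site_srHook_DC2 --type crm_config --query
```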

5.6. Configuring the ChkSrv HA/DR provider for the srServiceStateChanged() hook method

You can configure the optional ChkSrv hook if you want the HANA instance to be stopped or killed for faster recovery after an hdbindexserver process failure.

Procedure

  1. Change to the HANA configuration directory as the <sid>adm user. Use the command alias cdcoc, which is built into the <sid>adm user shell. This automatically changes into the /hana/shared/<SID>/global/hdb/custom/config/ path:

    rh1adm $ cdcoc
  2. Update the global.ini file of the HANA instance to configure the hook script. Edit the configuration file on all HANA instance systems and add the following content in addition to the HanaSR provider definition:

    [ha_dr_provider_chksrv]
    provider = ChkSrv
    path = /usr/share/sap-hana-ha/
    execution_order = 2
    action_on_lost = stop
    
    [trace]
    ha_dr_hanasr = info
    ha_dr_chksrv = info
  3. Optional: If you want to use the fence option for action_on_lost, you must add the SAPHanaSR-hookHelper to the sudo configuration of the <sid>adm user:

    [root]# visudo -f /etc/sudoers.d/20-saphana
    …
    <sid>adm ALL=(ALL) NOPASSWD: /usr/bin/SAPHanaSR-hookHelper
  4. Optional: Activate the ChkSrv provider while HANA is running by reloading the HA/DR providers. Skip this step if you configure the hook script while the instance is down; in that case, the HA/DR provider is loaded automatically at the next instance start:

    rh1adm $ hdbnsutil -reloadHADRProviders

Verification

  1. Change to the SAP HANA directory, as the <sid>adm user, where trace log files are stored. Use the command alias cdtrace, which is built into the <sid>adm user shell:

    rh1adm $ cdtrace
  2. Check that the changes are loaded:

    rh1adm $ grep -e "loading HA/DR Provider.*ChkSrv.*" nameserver_*
    loading HA/DR Provider 'ChkSrv' from /usr/share/sap-hana-ha/
  3. Check that the dedicated trace log file is created and the provider loaded with the correct configuration parameters:

    rh1adm $ cat nameserver_chksrv.trc
    init called
    ChkSrv.init() version 1.001.1, parameter info: action_on_lost=stop stop_timeout=20 kill_signal=9

5.7. Creating the HANA cluster resources

You must configure both the SAPHanaTopology and SAPHanaController resources so that the cluster can collect the status of the HANA landscape, monitor the instance health and take action to manage the instance when required.

The SAPHanaFilesystem resource is optional. You can add it to improve the time to action in case the file system of the primary instance becomes unavailable.

Procedure

  1. Create the SAPHanaTopology resource as a clone resource, which means it runs on all cluster nodes at the same time:

    [root]# pcs resource create rsc_SAPHanaTop_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaTopology \
    SID=<SID> \
    InstanceNumber=<instance> \
    op start timeout=600 \
    op stop timeout=300 \
    op monitor interval=30 timeout=300 \
    clone cln_SAPHanaTop_<SID>_HDB<instance> \
    meta clone-max=2 clone-node-max=1 interleave=true --future
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.

      Note

      Since RHEL 9.3, a deprecation warning is displayed when the meta keyword is not used in the clone command, and the attributes are automatically assigned to the base resource.

      In the future, the meta keyword will be required for assigning attributes to the clone resource. Until then, add the --future parameter to apply the new behavior now.

      See also New pcs parsing requires meta keyword when specifying clone meta attributes.

  2. Create the SAPHanaController resource as a promotable clone resource. This means it runs on all cluster nodes at the same time, but on one node it functions as the active, or primary, instance:

    [root]# pcs resource create rsc_SAPHanaCon_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaController \
    SID=<SID> \
    InstanceNumber=<instance> \
    PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 \
    AUTOMATED_REGISTER=false \
    op stop timeout=3600 \
    op monitor interval=59 role=Promoted timeout=700 \
    op monitor interval=61 role=Unpromoted timeout=700 \
    meta priority=100 \
    promotable cln_SAPHanaCon_<SID>_HDB<instance> \
    meta clone-max=2 clone-node-max=1 interleave=true --future

    We recommend creating the resource with AUTOMATED_REGISTER=false and then verifying correct behavior and data consistency through tests to complete the setup. For more information, see Testing the setup. Alternatively, you can enable automatic registration at creation time by setting the parameter to true.

    See SAPHanaController resource parameters for more details.

  3. Optional: If you want the cluster to fence the node when the SAPHanaController resource fails, then update the resource with the ON_FAIL_ACTION parameter and set it to fence:

    [root]# pcs resource update rsc_SAPHanaCon_<SID>_HDB<instance> ON_FAIL_ACTION=fence
  4. You must start SAPHanaTopology before SAPHanaController, because it collects HANA landscape information, which the SAPHanaController resource requires to start correctly. Create the cluster constraint that enforces the correct start order of the two resources:

    [root]# pcs constraint order cln_SAPHanaTop_<SID>_HDB<instance> \
    then cln_SAPHanaCon_<SID>_HDB<instance> symmetrical=false

    Setting symmetrical=false indicates that the constraint only influences the start order of the resources, but it does not apply to the stop order.

  5. Optional: Create the SAPHanaFilesystem resource as a clone resource:

    [root]# pcs resource create rsc_SAPHanaFil_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaFilesystem \
    SID=<SID> \
    InstanceNumber=<instance> \
    ON_FAIL_ACTION="fence" \
    op start interval=0 timeout=10 \
    op stop interval=0 timeout=20 \
    op monitor interval=120 timeout=120 \
    clone cln_SAPHanaFil_<SID>_HDB<instance> \
    meta clone-node-max=1 interleave=true --future

    Instead of setting ON_FAIL_ACTION=fence, you can set it to ignore. This can be useful for testing the functionality first: the resource writes information to the system logs, which you can examine to evaluate whether the resource would take the intended action if the fence action were enabled.

Verification

  1. Review the SAPHanaTopology resource clone. Example resource configuration:

    [root]# pcs resource config cln_SAPHanaTop_RH1_HDB02
    Clone: cln_SAPHanaTop_RH1_HDB02
      Meta Attributes: cln_SAPHanaTop_RH1_HDB02-meta_attributes
        clone-max=2
        clone-node-max=1
        interleave=true
      Resource: rsc_SAPHanaTop_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaTopology)
        Attributes: rsc_SAPHanaTop_RH1_HDB02-instance_attributes
          InstanceNumber=02
          SID=RH1
        Operations:
          methods: rsc_SAPHanaTop_RH1_HDB02-methods-interval-0s
            interval=0s timeout=5
          monitor: rsc_SAPHanaTop_RH1_HDB02-monitor-interval-30
            interval=30 timeout=300
          reload: rsc_SAPHanaTop_RH1_HDB02-reload-interval-0s
            interval=0s timeout=5
          start: rsc_SAPHanaTop_RH1_HDB02-start-interval-0s
            interval=0s timeout=600
          stop: rsc_SAPHanaTop_RH1_HDB02-stop-interval-0s
            interval=0s timeout=300
  2. Review the SAPHanaController resource clone. Example resource configuration:

    [root]# pcs resource config cln_SAPHanaCon_RH1_HDB02
    Clone: cln_SAPHanaCon_RH1_HDB02
      Meta Attributes: cln_SAPHanaCon_RH1_HDB02-meta_attributes
        clone-max=2
        clone-node-max=1
        interleave=true
        promotable=true
      Resource: rsc_SAPHanaCon_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaController)
        Attributes: rsc_SAPHanaCon_RH1_HDB02-instance_attributes
          AUTOMATED_REGISTER=false
          DUPLICATE_PRIMARY_TIMEOUT=7200
          InstanceNumber=02
          PREFER_SITE_TAKEOVER=true
          SID=RH1
        Meta Attributes: rsc_SAPHanaCon_RH1_HDB02-meta_attributes
          priority=100
        Operations:
          demote: rsc_SAPHanaCon_RH1_HDB02-demote-interval-0s
            interval=0s timeout=320
          methods: rsc_SAPHanaCon_RH1_HDB02-methods-interval-0s
            interval=0s timeout=5
          monitor: rsc_SAPHanaCon_RH1_HDB02-monitor-interval-59
            interval=59 timeout=700 role=Promoted
          monitor: rsc_SAPHanaCon_RH1_HDB02-monitor-interval-61
            interval=61 timeout=700 role=Unpromoted
          promote: rsc_SAPHanaCon_RH1_HDB02-promote-interval-0s
            interval=0s timeout=900
          reload: rsc_SAPHanaCon_RH1_HDB02-reload-interval-0s
            interval=0s timeout=5
          start: rsc_SAPHanaCon_RH1_HDB02-start-interval-0s
            interval=0s timeout=3600
          stop: rsc_SAPHanaCon_RH1_HDB02-stop-interval-0s
            interval=0s timeout=3600
  3. Check that the start order constraint is in place:

    [root]# pcs constraint order
    Order Constraints:
      start resource 'cln_SAPHanaTop_RH1_HDB02' then start resource 'cln_SAPHanaCon_RH1_HDB02'
        symmetrical=0
  4. Optional: Review the SAPHanaFilesystem resource clone if you created it. Example resource configuration:

    [root]# pcs resource config cln_SAPHanaFil_RH1_HDB02
    Clone: cln_SAPHanaFil_RH1_HDB02
      Meta Attributes: cln_SAPHanaFil_RH1_HDB02-meta_attributes
        clone-node-max=1
        interleave=true
      Resource: rsc_SAPHanaFil_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaFilesystem)
        Attributes: rsc_SAPHanaFil_RH1_HDB02-instance_attributes
          InstanceNumber=02
          ON_FAIL_ACTION=fence
          SID=RH1
        Operations:
          methods: rsc_SAPHanaFil_RH1_HDB02-methods-interval-0s
            interval=0s timeout=5
          monitor: rsc_SAPHanaFil_RH1_HDB02-monitor-interval-120
            interval=120 timeout=120
          reload: rsc_SAPHanaFil_RH1_HDB02-reload-interval-0s
            interval=0s timeout=5
          start: rsc_SAPHanaFil_RH1_HDB02-start-interval-0
            interval=0 timeout=10
          stop: rsc_SAPHanaFil_RH1_HDB02-stop-interval-0
            interval=0 timeout=20
  5. Check the cluster status. Use --full to include node attributes, which are updated by the HANA resources:

    [root]# pcs status --full
    ...
    
    Full List of Resources:
      * Clone Set: cln_SAPHanaTop_RH1_HDB02 [rsc_SAPHanaTop_RH1_HDB02]:
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started node1
        * rsc_SAPHanaTop_RH1_HDB02  (ocf:heartbeat:SAPHanaTopology):         Started node2
      * Clone Set: cln_SAPHanaCon_RH1_HDB02 [rsc_SAPHanaCon_RH1_HDB02] (promotable):
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Promoted node1
        * rsc_SAPHanaCon_RH1_HDB02  (ocf:heartbeat:SAPHanaController):       Unpromoted node2
      * Clone Set: cln_SAPHanaFil_RH1_HDB02 [rsc_SAPHanaFil_RH1_HDB02]:
        * rsc_SAPHanaFil_RH1_HDB02  (ocf:heartbeat:SAPHanaFilesystem):       Started node1
        * rsc_SAPHanaFil_RH1_HDB02  (ocf:heartbeat:SAPHanaFilesystem):       Started node2
    
    Node Attributes:
      * Node: node1 (1):
        * hana_rh1_clone_state              : PROMOTED
        * hana_rh1_roles                    : master1:master:worker:master
        * hana_rh1_site                     : DC1
        * hana_rh1_srah                     : -
        * hana_rh1_version                  : 2.00.078.00
        * hana_rh1_vhost                    : node1
        * master-rsc_SAPHanaCon_RH1_HDB02   : 150
      * Node: node2 (2):
        * hana_rh1_clone_state              : DEMOTED
        * hana_rh1_roles                    : master1:master:worker:master
        * hana_rh1_site                     : DC2
        * hana_rh1_srah                     : -
        * hana_rh1_version                  : 2.00.078.00
        * hana_rh1_vhost                    : node2
        * master-rsc_SAPHanaCon_RH1_HDB02   : 100
    
    ...
Note

The timeouts shown for the resource operations are only recommended defaults and can be adjusted depending on your SAP HANA environment. For example, large SAP HANA databases can take longer to start up and therefore you might have to increase the start timeout.
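For example, to raise the start timeout of the SAPHanaController resource for a large database, you can update the operation in place. This is a sketch with the example resource name from this chapter and an illustrative value:

```shell
# Increase the start timeout of the SAPHanaController resource
# (example name and value; adjust to your environment).
pcs resource update rsc_SAPHanaCon_RH1_HDB02 op start timeout=7200
```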

Warning

Setting AUTOMATED_REGISTER to true can increase the risk of data loss or corruption. If the HA cluster triggers a takeover while the data on the secondary HANA instance is not fully in sync, the automatic registration of the old primary HANA instance as the new secondary results in the loss of the data on that instance; any data that was not synced before the takeover is lost as well.

For more information, see the article on the SAP Technology Blog for Members: Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 2: Failure of Both Nodes.

5.8. Creating the virtual IP resource

You must configure a virtual IP (VIP) resource for SAP clients to access the primary HANA instance independently from the cluster node it is currently running on. Configure the VIP resource to automatically move to the node where the primary instance is running.

The resource agent required for the VIP resource depends on the platform. The following examples use the IPaddr2 resource agent to demonstrate the setup.

Prerequisites

  • You have reserved a virtual IP for the service.

Procedure

  1. Use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. Adjust the parameters according to the resource agent you are using. Create the cluster resource for the primary virtual IP, for example using the IPaddr2 agent:

    [root]# pcs resource create rsc_vip_<SID>_HDB<instance>_primary \
    ocf:heartbeat:IPaddr2 ip=<address> cidr_netmask=<netmask> nic=<device>
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
    • Replace <address>, <netmask> and <device> with the details of your primary virtual IP address.
  2. Create a cluster constraint that places the VIP resource with the SAPHanaController resource on the HANA primary node:

    [root]# pcs constraint colocation add rsc_vip_<SID>_HDB<instance>_primary \
    with promoted cln_SAPHanaCon_<SID>_HDB<instance> 2000

    The constraint applies a score of 2000 instead of the default INFINITY. This softens the resource dependency and allows the virtual IP resource to stay active even when there is no promoted SAPHanaController resource. That way, tools such as the SAP Management Console (MMC) or SAP Landscape Management (LaMa) can still reach this IP address to query status information of the HANA instance.

Verification

  1. Check the resource configuration of the virtual IP resource:

    [root]# pcs resource config rsc_vip_RH1_HDB02_primary
    Resource: rsc_vip_RH1_HDB02_primary (class=ocf provider=heartbeat type=IPaddr2)
      Attributes: rsc_vip_RH1_HDB02_primary-instance_attributes
        cidr_netmask=32
        ip=192.168.1.100
        nic=eth0
      Operations:
        monitor: rsc_vip_RH1_HDB02_primary-monitor-interval-10s
          interval=10s timeout=20s
        start: rsc_vip_RH1_HDB02_primary-start-interval-0s
          interval=0s timeout=20s
        stop: rsc_vip_RH1_HDB02_primary-stop-interval-0s
          interval=0s timeout=20s
  2. Check that the constraint is defined correctly:

    [root]# pcs constraint colocation
    Colocation Constraints:
      Started resource 'rsc_vip_RH1_HDB02_primary' with Promoted resource 'cln_SAPHanaCon_RH1_HDB02'
        score=2000

5.9. Creating a virtual IP resource for an Active/Active (read-enabled) setup

To support an Active/Active (read-enabled) secondary setup, you must add a second virtual IP that provides client access to the secondary SAP HANA instance.

Configure additional rules to ensure that the second virtual IP is always associated with a healthy SAP HANA instance, maximizing client access and availability.

  • Normal operation

    When both primary and secondary SAP HANA instances are active and replication is in sync, the second virtual IP is assigned to the secondary node.

  • Secondary unavailable or out of sync

    If the secondary instance is down or replication is not in sync, the virtual IP moves to the primary node. It automatically returns to the secondary node as soon as system replication is back in sync.

  • Failover scenario

    If the cluster triggers a takeover, the second virtual IP initially remains on the node where it is running. After the former primary node takes over the secondary role and replication is in sync again, the virtual IP moves to that node.
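You can inspect the node attributes that drive this placement logic directly. A sketch, assuming the example node and attribute names used in this chapter:

```shell
# Promotion score of the SAPHanaController clone on node2
# (150 indicates the promotable side, 100 the in-sync secondary).
crm_attribute --node node2 --name master-rsc_SAPHanaCon_RH1_HDB02 \
  --lifetime reboot --query

# Clone state of the HANA instance on node2 (PROMOTED or DEMOTED).
crm_attribute --node node2 --name hana_rh1_clone_state \
  --lifetime reboot --query
```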

Prerequisites

  • You have set operationMode=logreplay_readaccess when registering the secondary SAP HANA instance for system replication with the primary.

Procedure

  1. Use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. Adjust the parameters according to the resource agent you are using. Create the cluster resource for the secondary virtual IP, for example using the IPaddr2 agent:

    [root]# pcs resource create rsc_vip_<SID>_HDB<instance>_readonly \
    ocf:heartbeat:IPaddr2 ip=<address> cidr_netmask=<netmask> nic=<device>
    • Replace <SID> with your HANA SID.
    • Replace <instance> with your HANA instance number.
    • Replace <address>, <netmask> and <device> with the details of your read-only secondary virtual IP address.
  2. Create a location constraint rule to ensure that the secondary virtual IP is assigned to the secondary instance during normal operations:

    [root]# pcs constraint location rsc_vip_<SID>_HDB<instance>_readonly \
    rule score=INFINITY master-rsc_SAPHanaCon_<SID>_HDB<instance> eq 100 \
    and hana_<sid>_clone_state eq DEMOTED
    • Replace <SID> with your HANA SID.
    • Replace <sid> with the lower-case HANA SID.
    • Replace <instance> with your HANA instance number.
  3. Create a location constraint rule to ensure that the secondary virtual IP runs on the primary instance as an alternative whenever necessary:

    [root]# pcs constraint location rsc_vip_<SID>_HDB<instance>_readonly \
    rule score=2000 master-rsc_SAPHanaCon_<SID>_HDB<instance> eq 150 \
    and hana_<sid>_clone_state eq PROMOTED

Verification

  • Check that the constraints are part of the cluster configuration:

    [root]#  pcs constraint location
    Location Constraints:
      resource 'rsc_vip_RH1_HDB02_readonly'
        Rules:
          Rule: boolean-op=and score=INFINITY
            Expression: master-rsc_SAPHanaCon_RH1_HDB02 eq 100
            Expression: hana_rh1_clone_state eq DEMOTED
      resource 'rsc_vip_RH1_HDB02_readonly'
        Rules:
          Rule: boolean-op=and score=2000
            Expression: master-rsc_SAPHanaCon_RH1_HDB02 eq 150
            Expression: hana_rh1_clone_state eq PROMOTED