Chapter 5. Configuring the Pacemaker cluster
5.1. Deploying the basic cluster configuration
The following procedure covers the minimum steps to get started with a basic Pacemaker cluster setup for managing SAP HANA system replication.
For more information on settings and options for complex configurations, refer to the documentation for RHEL HA Add-On, for example, Create a high availability cluster with multiple links.
Prerequisites
- You have set up the HANA system replication environment and verified that it is working correctly.
- You have configured the RHEL High Availability repository on all systems that are going to be nodes of this cluster.
- You have verified fencing and quorum requirements according to your planned environment. For more details, see HA cluster requirements.
Procedure
Install the Red Hat High Availability Add-On software packages from the High Availability repository. Choose which fence agents you want to install and execute the installation on all cluster nodes.
Either install the cluster packages and all fence agents:
[root]# dnf install pcs pacemaker fence-agents-all
Or install the cluster packages and only a specific fence agent, depending on your environment:
[root]# dnf install pcs pacemaker fence-agents-<model>
Start and enable the pcsd service on all cluster nodes:
[root]# systemctl enable --now pcsd.service
Optional: If you are running the firewalld service, enable the ports that are required by the Red Hat High Availability Add-On. Run this on all cluster nodes:
[root]# firewall-cmd --add-service=high-availability
[root]# firewall-cmd --runtime-to-permanent
Set a password for the user hacluster. Repeat the command on each node using the same password:
[root]# passwd hacluster
Authenticate the user hacluster for each node in the cluster. Run this on the first node:
[root]# pcs host auth <node1> <node2>
Username: hacluster
Password:
<node1>: Authorized
<node2>: Authorized
- Enter the node names with or without FQDN, as defined in the /etc/hosts file.
- Enter the hacluster user password in the prompt.
Create the cluster with a name and provide the names of the cluster members, for example node1 and node2, with fully qualified host names. This propagates the cluster configuration to both nodes and starts the cluster. Run this command on the first node:
[root]# pcs cluster setup <cluster_name> --start <node1> <node2>
No addresses specified for host 'node1', using 'node1'
No addresses specified for host 'node2', using 'node2'
Destroying cluster on hosts: 'node1', 'node2'...
node2: Successfully destroyed cluster
node1: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'node1', 'node2'
node1: successful removal of the file 'pcsd settings'
node2: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'node1', 'node2'
node1: successful distribution of the file 'corosync authkey'
node1: successful distribution of the file 'pacemaker authkey'
node2: successful distribution of the file 'corosync authkey'
node2: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'node1', 'node2'
node1: successful distribution of the file 'corosync.conf'
node2: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
Starting cluster on hosts: 'node1', 'node2'...
Enable the cluster to be started automatically on system start, which enables the corosync and pacemaker services. Skip this step if you prefer to manually control the start of the cluster after a node restarts. Run on one node:
[root]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
Verification
Check the cluster status. Verify that the cluster daemon services are in the desired state:
[root]# pcs status --full
Cluster name: node1-node2-cluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Status:
 Cluster Summary:
   * Stack: corosync (Pacemaker is running)
   * Current DC: node1 (version ) - partition with quorum
   * Last updated: on node1
   * Last change: by hacluster via hacluster on node1
   * 2 nodes configured
   * 0 resource instances configured
...
PCSD Status:
  node1: Online
  node2: Online
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Next steps
- Configure a fencing method to enable the STONITH mechanism. See Configuring fencing in a Red Hat High Availability cluster.
- Test the fencing setup before you proceed with further configuration of the cluster. For more information, see How to test fence devices and fencing configuration in a Red Hat High Availability cluster?
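For illustration only, a fencing resource for nodes with IPMI-based management controllers might look like the following sketch. The resource name and all values are placeholders, and the correct fence agent depends entirely on your platform; always check the available parameters with `pcs stonith describe <agent>` and the linked fencing documentation before creating the resource:

```shell
# Hypothetical example: IPMI-based fencing for node1. Adapt the agent
# and parameters (ip, username, password, lanplus) to your environment.
[root]# pcs stonith create rsc_fence_node1 fence_ipmilan \
    pcmk_host_list=node1 ip=<bmc_address> \
    username=<bmc_user> password=<bmc_password> lanplus=1
```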
5.2. Configuring general cluster properties
You must adjust cluster resource defaults to avoid unnecessary failovers of the resources.
Procedure
Update cluster resource defaults to avoid unnecessary failovers of the resources. Run the command on one cluster node to apply the change to the cluster configuration:
[root]# pcs resource defaults update \
    resource-stickiness=1000 \
    migration-threshold=5000
- resource-stickiness=1000 encourages the resource to stay running where it is. This prevents the cluster from moving resources based on light internal health indicators.
- migration-threshold=5000 allows the resource to be restarted on the same node for up to 5000 failures. After exceeding this limit, the resource is banned from the node until the failure count has been cleared. This allows resource recovery after occasional failures, while giving an administrator time to investigate the cause of repeated failures and reset the counter.
Verification
Check that the resource defaults are set:
[root]# pcs resource defaults
Meta Attrs: build-resource-defaults
  migration-threshold=5000
  resource-stickiness=1000
5.3. Configuring the systemd-based SAP startup framework
Systemd integration is the default behavior of SAP HANA installations on RHEL 9 for SAP HANA 2.0 SPS07 revision 70 and newer. In HA environments you must apply additional modifications to integrate the different systemd services that are involved in the cluster setup.
Configure the pacemaker systemd service to manage the HANA instance systemd service in the correct order.
Prerequisites
You have installed the HANA instance with systemd integration and have checked on all nodes, for example:
[root]# systemctl list-units --all SAP*
UNIT               LOAD   ACTIVE SUB     DESCRIPTION
SAPRH1_02.service  loaded active running SAP Instance SAPRH1_02
SAP.slice          loaded active active  SAP Slice
...
Procedure
Create the directory /etc/systemd/system/pacemaker.service.d/ for the pacemaker service drop-in file:
[root]# mkdir /etc/systemd/system/pacemaker.service.d/
Create the systemd drop-in file for the pacemaker service with the following content:
[root]# cat << EOF > /etc/systemd/system/pacemaker.service.d/00-pacemaker.conf
[Unit]
Description=Pacemaker needs the SAP HANA instance service
Wants=SAP<SID>_<instance>.service
After=SAP<SID>_<instance>.service
EOF
- Replace <SID> with your HANA SID.
- Replace <instance> with your HANA instance number.
Reload the systemd daemon to activate the drop-in file:
[root]# systemctl daemon-reload
Repeat steps 1-3 on the other cluster nodes.
Verification
Check the systemd service of your HANA instance and verify that it is loaded:
[root]# systemctl status SAPRH1_02.service
● SAPRH1_02.service - SAP Instance SAPRH1_02
   Loaded: loaded (/etc/systemd/system/SAPRH1_02.service; disabled; preset: disabled)
   Active: active (running) since xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 Main PID: 5825 (sapstartsrv)
    Tasks: 841
   Memory: 88.6G
      CPU: 4h 50min 2.033s
   CGroup: /SAP.slice/SAPRH1_02.service
           ├─ 5825 /usr/sap/RH1/HDB02/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_node1
           ├─ 5986 sapstart pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_node1
           ├─ 5993 /usr/sap/RH1/HDB02/node1/trace/hdb.sapRH1_HDB02 -d -nw -f /usr/sap/RH1/HDB02/node1/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB02_node1
...
Verify that the SAP HANA instance service is now known to the pacemaker service:
[root]# systemctl show pacemaker.service | grep SAP
Wants=SAPRH1_02.service resource-agents-deps.target dbus-broker.service
After=... SAPRH1_02.service rsyslog.service
Make sure that SAP<SID>_<instance>.service is listed in both the After= and Wants= lists.
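The manual Wants= check can also be scripted. The following sketch is not part of the Red Hat procedure; it only illustrates the string check, using a sample line that mirrors the verification output above (on a real node you would feed it `systemctl show -p Wants pacemaker.service`):

```shell
# Sketch: check whether SAPRH1_02.service appears in the pacemaker
# unit's Wants= list. The sample line below is hard-coded for clarity.
wants='Wants=SAPRH1_02.service resource-agents-deps.target dbus-broker.service'
case " ${wants#Wants=} " in
  *" SAPRH1_02.service "*) echo "present" ;;
  *) echo "missing" ;;
esac
```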
5.4. Installing the SAP HANA HA components
The sap-hana-ha RPM package in the Red Hat Enterprise Linux 9 for <arch> - SAP Solutions (RPMs) repository provides resource agents and other SAP HANA specific components for setting up an HA cluster that manages a HANA system replication setup.
The package sap-hana-ha is only available since RHEL 9.4. If you configure a cluster for HANA HA on an older RHEL 9 release, follow the instructions in Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On instead.
Procedure
Install the sap-hana-ha package on all cluster nodes:
[root]# dnf install sap-hana-ha
Verification
Check on all nodes that the package is installed:
[root]# rpm -q sap-hana-ha
sap-hana-ha-<version>.<release>.noarch
5.5. Configuring the HanaSR HA/DR provider for the srConnectionChanged() hook method
When you configure the HANA instance in a HA cluster setup with SAP HANA 2.0 SPS0 or later, you must enable and test the SAP HANA srConnectionChanged() hook method before proceeding with the cluster setup.
Prerequisites
- You have installed the sap-hana-ha package.
- Your HANA instance is not yet managed by the cluster. Otherwise, use the maintenance procedure Updating the SAP HANA instances to make sure that the cluster does not interfere during the configuration of the hook scripts.
Procedure
Stop the HANA instances on all nodes as the <sid>adm user:
rh1adm $ HDB stop
Verify as <sid>adm on all nodes that the HANA instances are stopped completely:
rh1adm $ sapcontrol -nr ${TINSTANCE} -function GetProcessList | column -s ',' -t
...
name       description  dispstatus  textstatus  starttime  elapsedtime  pid
hdbdaemon  HDB Daemon   GRAY        Stopped                             5381
Change to the HANA configuration directory, as the <sid>adm user, using the command alias cdcoc, which is built into the <sid>adm user shell. This automatically changes into the /hana/shared/<SID>/global/hdb/custom/config/ path:
rh1adm $ cdcoc
Update the global.ini file of the SAP HANA instance to configure the HanaSR hook. Edit the configuration file on all HANA instance nodes and add the following configuration:
[ha_dr_provider_hanasr]
provider = HanaSR
path = /usr/share/sap-hana-ha/
execution_order = 1

[trace]
ha_dr_hanasr = info
Set execution_order to 1 to ensure that the HanaSR hook is always executed with the highest priority.
Optional: If you also want to configure the optional ChkSrv hook for taking action on an hdbindexserver failure, you can add the changes to the global.ini at the same time, see step 1 in Configuring the ChkSrv HA/DR provider for the srServiceStateChanged() hook method:
[ha_dr_provider_hanasr]
provider = HanaSR
path = /usr/share/sap-hana-ha/
execution_order = 1

[ha_dr_provider_chksrv]
provider = ChkSrv
path = /usr/share/sap-hana-ha/
execution_order = 2
action_on_lost = stop

[trace]
ha_dr_hanasr = info
ha_dr_chksrv = info
Create the file /etc/sudoers.d/20-saphana, as the root user, on each cluster node with the following content. These command privileges allow the <sid>adm user to update certain cluster node attributes as part of the HanaSR hook execution:
[root]# visudo -f /etc/sudoers.d/20-saphana
<sid>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_*
Defaults:<sid>adm !requiretty
- For further information on why the Defaults setting is needed, refer to The srHook attribute is set to SFAIL in a Pacemaker cluster managing SAP HANA system replication, even though replication is in a healthy state.
Start the HANA instances on both cluster nodes manually without starting the HA cluster. Run as <sid>adm:
rh1adm $ HDB start
Verification
Change to the SAP HANA directory, as the <sid>adm user, where trace log files are stored. Use the command alias cdtrace, which is built into the <sid>adm user shell:
rh1adm $ cdtrace
Check the HANA nameserver service logs for the HA/DR provider loading message:
If only the HanaSR provider is configured:
rh1adm $ grep -he "loading HA/DR Provider.*" nameserver_*
loading HA/DR Provider 'HanaSR' from /usr/share/sap-hana-ha/
If the optional ChkSrv provider is also implemented:
rh1adm $ grep -he "loading HA/DR Provider.*" nameserver_*
loading HA/DR Provider 'ChkSrv' from /usr/share/sap-hana-ha/
loading HA/DR Provider 'HanaSR' from /usr/share/sap-hana-ha/
Verify as user root in the system secure log that the sudo command executed successfully. If the sudoers file is not correct, an error is logged when the sudo command is executed:
[root]# grep -e 'sudo.*crm_attribute.*' /var/log/secure
sudo[4267]: rh1adm : PWD=/hana/shared/RH1/HDB02/node1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SFAIL -t crm_config -s SAPHanaSR
sudo[4319]: rh1adm : PWD=/hana/shared/RH1/HDB02/node1 ; USER=root ; COMMAND=/usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SOK -t crm_config -s SAPHanaSR
After the HANA instance starts on both nodes, you can usually see several srHook attribute updates. At first the attribute is set to SFAIL, because immediately after the primary starts, it is not yet in sync with the secondary, which is still synchronizing at this time. The last update to SOK is triggered by the HANA event after the system replication status finally changes to fully in sync.
Repeat the verification steps 1-2 on the second instance, if not already done at the same time. The sudo log messages of step 3 are only visible on the primary instance node, where the system replication events are sent.
Check the cluster attributes on any node and verify that the value for the hana_<sid>_site_srHook_<DC2> attribute is updated as expected:
[root]# cibadmin --query | grep -e 'SAPHanaSR.*srHook'
<nvpair id="SAPHanaSR-hana_rh1_site_srHook_DC2" name="hana_rh1_site_srHook_DC2" value="SOK"/>
- SOK is set when the HANA system replication is in ACTIVE state, which means established and fully in sync.
- SFAIL is set when the system replication is in any other state.
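When checking the srHook attribute repeatedly, extracting just the status value can be convenient. This sketch is not part of the Red Hat procedure; it hard-codes a sample nvpair line matching the cibadmin output in this section (site DC2, SID rh1 are examples from this chapter; adapt the pattern to your landscape and pipe in `cibadmin --query` on a real node):

```shell
# Sketch: pull the srHook status value (SOK or SFAIL) out of a cibadmin
# nvpair line. The sample line mirrors the verification output above.
sample='<nvpair id="SAPHanaSR-hana_rh1_site_srHook_DC2" name="hana_rh1_site_srHook_DC2" value="SOK"/>'
state=$(printf '%s\n' "$sample" | grep -o 'srHook_DC2" value="[A-Z]*"' | cut -d'"' -f3)
echo "$state"   # prints SOK when replication is fully in sync
```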
Troubleshooting
5.6. Configuring the ChkSrv HA/DR provider for the srServiceStateChanged() hook method
You can configure the hook ChkSrv if you want the HANA instance to be stopped or killed for faster recovery after an indexserver process has failed. This configuration is optional.
Prerequisites
- You have installed the sap-hana-ha package.
- You have configured the HanaSR HA/DR provider. For more information, see Configuring the HanaSR HA/DR provider for the srConnectionChanged() hook method.
Procedure
Change to the HANA configuration directory as the <sid>adm user. Use the command alias cdcoc, which is built into the <sid>adm user shell. This automatically changes into the /hana/shared/<SID>/global/hdb/custom/config/ path:
rh1adm $ cdcoc
Update the global.ini file of the HANA instance to configure the hook script. Edit the configuration file on all HANA instance systems and add the following content in addition to the HanaSR provider definition:
[ha_dr_provider_chksrv]
provider = ChkSrv
path = /usr/share/sap-hana-ha/
execution_order = 2
action_on_lost = stop

[trace]
ha_dr_hanasr = info
ha_dr_chksrv = info
Optional: If you want to use the fence option for action_on_lost, you must add the SAPHanaSR-hookHelper to the sudo configuration of the <sid>adm user:
[root]# visudo -f /etc/sudoers.d/20-saphana
…
<sid>adm ALL=(ALL) NOPASSWD: /usr/bin/SAPHanaSR-hookHelper
Optional: Activate the ChkSrv provider while HANA is running by reloading the HA/DR providers. Skip this step when configuring the hook script while the instance is down; in that case, the HA/DR provider is loaded automatically at the next instance start.
rh1adm $ hdbnsutil -reloadHADRProviders
Verification
Change to the SAP HANA directory, as the <sid>adm user, where trace log files are stored. Use the command alias cdtrace, which is built into the <sid>adm user shell:
rh1adm $ cdtrace
Check that the changes are loaded:
rh1adm $ grep -e "loading HA/DR Provider.*ChkSrv.*" nameserver_*
loading HA/DR Provider 'ChkSrv' from /usr/share/sap-hana-ha/
Check that the dedicated trace log file is created and the provider is loaded with the correct configuration parameters:
rh1adm $ cat nameserver_chksrv.trc
init called
ChkSrv.init() version 1.001.1, parameter info: action_on_lost=stop stop_timeout=20 kill_signal=9
Troubleshooting
5.7. Creating the HANA cluster resources
You must configure both the SAPHanaTopology and SAPHanaController resources so that the cluster can collect the status of the HANA landscape, monitor the instance health, and take action to manage the instance when required.
The SAPHanaFilesystem resource is optional; you can add it to improve the time to action in case the file system of the primary instance becomes unavailable.
Procedure
Create the SAPHanaTopology resource as a clone resource, which means it runs on all cluster nodes at the same time:
[root]# pcs resource create rsc_SAPHanaTop_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaTopology \
    SID=<SID> \
    InstanceNumber=<instance> \
    op start timeout=600 \
    op stop timeout=300 \
    op monitor interval=30 timeout=300 \
    clone cln_SAPHanaTop_<SID>_HDB<instance> \
    meta clone-max=2 clone-node-max=1 interleave=true --future
- Replace <SID> with your HANA SID.
- Replace <instance> with your HANA instance number.
Note: Since RHEL 9.3, a deprecation warning is displayed when the meta keyword is not used in the clone command, and the attributes are automatically assigned to the base resource. In the future, the meta keyword for clone attributes will be required for assigning attributes to the clone resource. Until then, add the --future parameter to apply this behavior already now. See also New pcs parsing requires meta keyword when specifying clone meta attributes.
Create the SAPHanaController resource as a promotable clone resource. This means it runs on all cluster nodes at the same time, but on one node it functions as the active, or primary, instance:
[root]# pcs resource create rsc_SAPHanaCon_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaController \
    SID=<SID> \
    InstanceNumber=<instance> \
    PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 \
    AUTOMATED_REGISTER=false \
    op stop timeout=3600 \
    op monitor interval=59 role=Promoted timeout=700 \
    op monitor interval=61 role=Unpromoted timeout=700 \
    meta priority=100 \
    promotable cln_SAPHanaCon_<SID>_HDB<instance> \
    meta clone-max=2 clone-node-max=1 interleave=true --future
We recommend creating the resource with AUTOMATED_REGISTER=false and then verifying the correct behavior and data consistency through tests to complete the setup. For more information, see Testing the setup. You can also enable automated registration already at creation time by setting the parameter to true. See SAPHanaController resource parameters for more details.
Optional: If you want the cluster to fence the node when the SAPHanaController resource fails, update the resource with the ON_FAIL_ACTION parameter and set it to fence:
[root]# pcs resource update rsc_SAPHanaCon_<SID>_HDB<instance> ON_FAIL_ACTION=fence
You must start SAPHanaTopology before SAPHanaController, because it collects HANA landscape information that the SAPHanaController resource requires to start correctly. Create the cluster constraint that enforces the correct start order of the two resources:
[root]# pcs constraint order cln_SAPHanaTop_<SID>_HDB<instance> \
    then cln_SAPHanaCon_<SID>_HDB<instance> symmetrical=false
Setting symmetrical=false indicates that the constraint only influences the start order of the resources; it does not apply to the stop order.
Optional: Create the SAPHanaFilesystem resource as a clone resource:
[root]# pcs resource create rsc_SAPHanaFil_<SID>_HDB<instance> \
    ocf:heartbeat:SAPHanaFilesystem \
    SID=<SID> \
    InstanceNumber=<instance> \
    ON_FAIL_ACTION="fence" \
    op start interval=0 timeout=10 \
    op stop interval=0 timeout=20 \
    op monitor interval=120 timeout=120 \
    clone cln_SAPHanaFil_<SID>_HDB<instance> \
    meta clone-node-max=1 interleave=true --future
Instead of setting ON_FAIL_ACTION=fence, you can set it to ignore. This can be useful for testing the functionality first: the resource writes information to the system logs, which you can examine to evaluate whether the resource would take the wanted action when activated with the fence action.
Verification
Review the SAPHanaTopology resource clone. Example resource configuration:
[root]# pcs resource config cln_SAPHanaTop_RH1_HDB02
Clone: cln_SAPHanaTop_RH1_HDB02
  Meta Attributes: cln_SAPHanaTop_RH1_HDB02-meta_attributes
    clone-max=2 clone-node-max=1 interleave=true
  Resource: rsc_SAPHanaTop_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaTopology)
    Attributes: rsc_SAPHanaTop_RH1_HDB02-instance_attributes
      InstanceNumber=02 SID=RH1
    Operations:
      methods: rsc_SAPHanaTop_RH1_HDB02-methods-interval-0s interval=0s timeout=5
      monitor: rsc_SAPHanaTop_RH1_HDB02-monitor-interval-30 interval=30 timeout=300
      reload: rsc_SAPHanaTop_RH1_HDB02-reload-interval-0s interval=0s timeout=5
      start: rsc_SAPHanaTop_RH1_HDB02-start-interval-0s interval=0s timeout=600
      stop: rsc_SAPHanaTop_RH1_HDB02-stop-interval-0s interval=0s timeout=300
Review the SAPHanaController resource clone. Example resource configuration:
[root]# pcs resource config cln_SAPHanaCon_RH1_HDB02
Clone: cln_SAPHanaCon_RH1_HDB02
  Meta Attributes: cln_SAPHanaCon_RH1_HDB02-meta_attributes
    clone-max=2 clone-node-max=1 interleave=true promotable=true
  Resource: rsc_SAPHanaCon_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaController)
    Attributes: rsc_SAPHanaCon_RH1_HDB02-instance_attributes
      AUTOMATED_REGISTER=false DUPLICATE_PRIMARY_TIMEOUT=7200 InstanceNumber=02 PREFER_SITE_TAKEOVER=true SID=RH1
    Meta Attributes: rsc_SAPHanaCon_RH1_HDB02-meta_attributes
      priority=100
    Operations:
      demote: rsc_SAPHanaCon_RH1_HDB02-demote-interval-0s interval=0s timeout=320
      methods: rsc_SAPHanaCon_RH1_HDB02-methods-interval-0s interval=0s timeout=5
      monitor: rsc_SAPHanaCon_RH1_HDB02-monitor-interval-59 interval=59 timeout=700 role=Promoted
      monitor: rsc_SAPHanaCon_RH1_HDB02-monitor-interval-61 interval=61 timeout=700 role=Unpromoted
      promote: rsc_SAPHanaCon_RH1_HDB02-promote-interval-0s interval=0s timeout=900
      reload: rsc_SAPHanaCon_RH1_HDB02-reload-interval-0s interval=0s timeout=5
      start: rsc_SAPHanaCon_RH1_HDB02-start-interval-0s interval=0s timeout=3600
      stop: rsc_SAPHanaCon_RH1_HDB02-stop-interval-0s interval=0s timeout=3600
Check that the start order constraint is in place:
[root]# pcs constraint order
Order Constraints:
  start resource 'cln_SAPHanaTop_RH1_HDB02' then start resource 'cln_SAPHanaCon_RH1_HDB02'
    symmetrical=0
Optional: Review the SAPHanaFilesystem resource clone if you created it. Example resource configuration:
[root]# pcs resource config cln_SAPHanaFil_RH1_HDB02
Clone: cln_SAPHanaFil_RH1_HDB02
  Meta Attributes: cln_SAPHanaFil_RH1_HDB02-meta_attributes
    clone-node-max=1 interleave=true
  Resource: rsc_SAPHanaFil_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaFilesystem)
    Attributes: rsc_SAPHanaFil_RH1_HDB02-instance_attributes
      InstanceNumber=02 ON_FAIL_ACTION=fence SID=RH1
    Operations:
      methods: rsc_SAPHanaFil_RH1_HDB02-methods-interval-0s interval=0s timeout=5
      monitor: rsc_SAPHanaFil_RH1_HDB02-monitor-interval-120 interval=120 timeout=120
      reload: rsc_SAPHanaFil_RH1_HDB02-reload-interval-0s interval=0s timeout=5
      start: rsc_SAPHanaFil_RH1_HDB02-start-interval-0 interval=0 timeout=10
      stop: rsc_SAPHanaFil_RH1_HDB02-stop-interval-0 interval=0 timeout=20
Check the cluster status. Use --full to include node attributes, which are updated by the HANA resources:
[root]# pcs status --full
...
Full List of Resources:
  * Clone Set: cln_SAPHanaTop_RH1_HDB02 [rsc_SAPHanaTop_RH1_HDB02]:
    * rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology): Started node1
    * rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology): Started node2
  * Clone Set: cln_SAPHanaCon_RH1_HDB02 [rsc_SAPHanaCon_RH1_HDB02] (promotable):
    * rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Promoted node1
    * rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted node2
  * Clone Set: cln_SAPHanaFil_RH1_HDB02 [rsc_SAPHanaFil_RH1_HDB02]:
    * rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started node1
    * rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started node2

Node Attributes:
  * Node: node1 (1):
    * hana_rh1_clone_state            : PROMOTED
    * hana_rh1_roles                  : master1:master:worker:master
    * hana_rh1_site                   : DC1
    * hana_rh1_srah                   : -
    * hana_rh1_version                : 2.00.078.00
    * hana_rh1_vhost                  : node1
    * master-rsc_SAPHanaCon_RH1_HDB02 : 150
  * Node: node2 (2):
    * hana_rh1_clone_state            : DEMOTED
    * hana_rh1_roles                  : master1:master:worker:master
    * hana_rh1_site                   : DC2
    * hana_rh1_srah                   : -
    * hana_rh1_version                : 2.00.078.00
    * hana_rh1_vhost                  : node2
    * master-rsc_SAPHanaCon_RH1_HDB02 : 100
...
The timeouts shown for the resource operations are only recommended defaults and can be adjusted depending on your SAP HANA environment. For example, large SAP HANA databases can take longer to start up and therefore you might have to increase the start timeout.
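For example, to raise the start timeout of the SAPHanaController resource for a very large database, you could update the operation on one cluster node. The value 7200 is only an illustration; size it to the measured startup time of your database:

```shell
# Hypothetical example: increase the start timeout to 2 hours.
[root]# pcs resource update rsc_SAPHanaCon_<SID>_HDB<instance> \
    op start timeout=7200
```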
Setting AUTOMATED_REGISTER to true can increase the risk of data loss or corruption. If the HA cluster triggers a takeover while the data on the secondary HANA instance is not fully in sync, the automatic registration of the old primary HANA instance as the new secondary results in data loss on that instance: any data that was not synced before the takeover occurred is lost.
For more information, see the article on the SAP Technology Blog for Members: Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 2: Failure of Both Nodes.
5.8. Creating the virtual IP resource
You must configure a virtual IP (VIP) resource for SAP clients to access the primary HANA instance independently from the cluster node it is currently running on. Configure the VIP resource to automatically move to the node where the primary instance is running.
The resource agent needed for the VIP resource depends on the platform used. We are using the IPaddr2 resource agent to demonstrate the setup.
Prerequisites
- You have reserved a virtual IP for the service.
Procedure
Use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. Adjust the parameters according to the resource agent you are using. Create the cluster resource for the primary virtual IP, for example using the IPaddr2 agent:
[root]# pcs resource create rsc_vip_<SID>_HDB<instance>_primary \
    ocf:heartbeat:IPaddr2 ip=<address> cidr_netmask=<netmask> nic=<device>
- Replace <SID> with your HANA SID.
- Replace <instance> with your HANA instance number.
- Replace <address>, <netmask>, and <device> with the details of your primary virtual IP address.
Create a cluster constraint that places the VIP resource with the SAPHanaController resource on the HANA primary node:
[root]# pcs constraint colocation add rsc_vip_<SID>_HDB<instance>_primary \
    with promoted cln_SAPHanaCon_<SID>_HDB<instance> 2000
The constraint applies a score of 2000 instead of the default INFINITY. This softens the resource dependency and allows the virtual IP resource to stay active when there is no promoted SAPHanaController resource. This way it is still possible to use tools like the SAP Management Console (MMC) or SAP Landscape Management (LaMa), which can reach this IP address to query status information of the HANA instance.
Verification
Check the resource configuration of the virtual IP resource:
[root]# pcs resource config rsc_vip_RH1_HDB02_primary
Resource: rsc_vip_RH1_HDB02_primary (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: rsc_vip_RH1_HDB02_primary-instance_attributes
    cidr_netmask=32 ip=192.168.1.100 nic=eth0
  Operations:
    monitor: rsc_vip_RH1_HDB02_primary-monitor-interval-10s interval=10s timeout=20s
    start: rsc_vip_RH1_HDB02_primary-start-interval-0s interval=0s timeout=20s
    stop: rsc_vip_RH1_HDB02_primary-stop-interval-0s interval=0s timeout=20s
Check that the constraint is defined correctly:
[root]# pcs constraint colocation
Colocation Constraints:
  Started resource 'rsc_vip_RH1_HDB02_primary' with Promoted resource 'cln_SAPHanaCon_RH1_HDB02'
    score=2000
5.9. Adding a secondary (read-enabled) virtual IP address
To support the Active/Active (read-enabled) secondary setup you must add a second virtual IP to provide client access to the secondary SAP HANA instance.
Configure additional rules to ensure that the second virtual IP is always associated with a healthy SAP HANA instance, maximizing client access and availability.
Normal operation
When both primary and secondary SAP HANA instances are active and replication is in sync, the second virtual IP is assigned to the secondary node.
Secondary unavailable or out of sync
If the secondary instance is down or replication is not in sync, the virtual IP moves to the primary node. It automatically returns to the secondary node as soon as system replication is back in sync.
Failover scenario
If the cluster triggers a takeover, the virtual IP at first remains on the same node, which is now the primary. After the former primary node takes over the secondary role and replication is in sync again, the virtual IP moves to that node.
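The behavior described in these three scenarios follows from the two location constraint rules created in the procedure. As a plain-shell sketch, the placement decision per node can be summarized like this; the function is hypothetical and only illustrates the rule logic, since the cluster evaluates the real rules itself using the master score (150 promoted, 100 in-sync demoted) and clone state node attributes:

```shell
# Sketch of the read-enabled VIP placement rules, per node:
#   master score 100 + DEMOTED  -> healthy in-sync secondary -> INFINITY
#   master score 150 + PROMOTED -> primary as fallback       -> 2000
#   anything else               -> no rule matches           -> 0 (not placed)
vip_readonly_score() {
  master_score=$1
  clone_state=$2
  if [ "$master_score" -eq 100 ] && [ "$clone_state" = "DEMOTED" ]; then
    echo "INFINITY"
  elif [ "$master_score" -eq 150 ] && [ "$clone_state" = "PROMOTED" ]; then
    echo "2000"
  else
    echo "0"
  fi
}

vip_readonly_score 100 DEMOTED   # prints INFINITY
```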
Prerequisites
- You have set operationMode=logreplay_readaccess when registering the secondary SAP HANA instance for system replication with the primary.
Procedure
Use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. Adjust the parameters according to the resource agent you are using. Create the cluster resource for the secondary virtual IP, for example using the IPaddr2 agent:
[root]# pcs resource create rsc_vip_<SID>_HDB<instance>_readonly \
    ocf:heartbeat:IPaddr2 ip=<address> cidr_netmask=<netmask> nic=<device>
- Replace <SID> with your HANA SID.
- Replace <instance> with your HANA instance number.
- Replace <address>, <netmask>, and <device> with the details of your read-only secondary virtual IP address.
Create a location constraint rule to ensure that the secondary virtual IP is assigned to the secondary instance during normal operations:
[root]# pcs constraint location rsc_vip_<SID>_HDB<instance>_readonly \
    rule score=INFINITY master-rsc_SAPHanaCon_<SID>_HDB<instance> eq 100 \
    and hana_<sid>_clone_state eq DEMOTED
- Replace <SID> with your HANA SID.
- Replace <sid> with the lower-case HANA SID.
- Replace <instance> with your HANA instance number.
Create a location constraint rule to ensure that the secondary virtual IP runs on the primary instance as an alternative whenever necessary:
[root]# pcs constraint location rsc_vip_<SID>_HDB<instance>_readonly \
    rule score=2000 master-rsc_SAPHanaCon_<SID>_HDB<instance> eq 150 \
    and hana_<sid>_clone_state eq PROMOTED
Verification
Check that the constraints are part of the cluster configuration:
[root]# pcs constraint location
Location Constraints:
  resource 'rsc_vip_RH1_HDB02_readonly'
    Rules:
      Rule: boolean-op=and score=INFINITY
        Expression: master-rsc_SAPHanaCon_RH1_HDB02 eq 100
        Expression: hana_rh1_clone_state eq DEMOTED
  resource 'rsc_vip_RH1_HDB02_readonly'
    Rules:
      Rule: boolean-op=and score=2000
        Expression: master-rsc_SAPHanaCon_RH1_HDB02 eq 150
        Expression: hana_rh1_clone_state eq PROMOTED