Chapter 4. Optional settings
4.1. Adding a secondary virtual IP address resource for Active/Active (Read-Enabled) setup
Starting with SAP HANA 2.0 SPS1, SAP allows 'Active/Active (Read Enabled)' setups for SAP HANA System Replication. This allows you to:
- Enable SAP HANA System Replication to support read access on the secondary systems.
- Execute read-intense reporting on the secondary systems to remove this workload from the primary system.
- Reduce the bandwidth required during continuous operation.
For more information, also check SAP HANA System Replication.
A second virtual IP address is required to allow clients to access the secondary SAP HANA database. In case of a failure, if the secondary site is not accessible, the second IP will be switched to the primary site to avoid downtime of the read-only access.
The operationMode should be set to logreplay_readaccess. The second virtual IP and the additional necessary constraints can be configured with the following commands:
root# pcs resource create rsc_ip2_SAPHana_RH1_HDB10 ocf:heartbeat:IPaddr2 ip=10.0.0.251 op monitor interval="10s" timeout="20s"
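For reference, the operation mode is normally selected when the secondary site is registered for system replication. A minimal sketch, run as the SAP admin user on the secondary master node; the remote host and the replication mode below are placeholders and assumptions, not values taken from this guide:
sidadm% hdbnsutil -sr_register --remoteHost=<primary_host> --remoteInstance=10 --replicationMode=sync --operationMode=logreplay_readaccess --name=DC2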
4.1.1. Configuring additional constraints
The constraints listed above are strongly recommended. To adjust the behavior to your environment, additional constraints are required. Examples of these are:
root# pcs constraint location rsc_ip_SAPHana_RH1_HDB10 rule score=500 role=master hana_rh1_roles eq "master1:master:worker:master" and hana_rh1_clone_state eq PROMOTED
Move IP2 to the primary site in the event that the secondary site goes down:
root# pcs constraint location rsc_ip2_SAPHana_RH1_HDB10 rule score=50 id=vip_slave_master_constraint hana_rh1_roles eq 'master1:master:worker:master'
root# pcs constraint order promote rsc_SAPHana_RH1_HDB10-clone then start rsc_ip_SAPHana_RH1_HDB10
root# pcs constraint order start rsc_ip_SAPHana_RH1_HDB10 then start rsc_ip2_SAPHana_RH1_HDB10
root# pcs constraint colocation add rsc_ip_SAPHana_RH1_HDB10 with Master rsc_SAPHana_RH1_HDB10-clone 2000
root# pcs constraint colocation add rsc_ip2_SAPHana_RH1_HDB10 with Slave rsc_SAPHana_RH1_HDB10-clone 5
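To review the result, you can list the complete constraint configuration, including the generated constraint IDs; a quick check could look like this:
root# pcs constraint --full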
Procedure
Test the behavior. If the cluster is up and running, you can run:
root# watch pcs status
Stop the secondary HANA instance manually with:
sidadm% sapcontrol -nr ${TINSTANCE} -function StopSystem HDB
After a few seconds, the second IP address is moved to the primary host. Then you can manually start the database again with:
sidadm% sapcontrol -nr ${TINSTANCE} -function StartSystem HDB
- Restart the cluster for further use.
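A minimal sketch of such a cluster restart, run on one node; note that this stops and restarts all cluster-managed resources on all nodes:
root# pcs cluster stop --all
root# pcs cluster start --all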
4.2. Adding filesystem monitoring
Pacemaker does not actively monitor mount points unless filesystem resources manage them. In a scale-out environment, the databases can be distributed over different availability zones. Mount points can be specific to a zone, which then needs to be specified as node attributes. If mounts should only be handled by filesystem resources, they should be removed from /etc/fstab. Mounts are required to run SAP HANA services; hence, order constraints must ensure that the filesystems are mounted before the SAP HANA services start. For further information, check How do I configure SAP HANA Scale-Out System Replication in a Pacemaker cluster when the HANA filesystems are on NFS shares?.
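For illustration, such an order constraint could look like the following sketch; the filesystem clone name is taken from the example in the next section and rsc_SAPHana_RH1_HDB10-clone is the SAPHana clone name used earlier in this document, so adjust both names to your environment:
[root@dc1hana01]# pcs constraint order start nfs_hana_shared_dc1-clone then start rsc_SAPHana_RH1_HDB10-clone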
4.2.1. Filesystem resource example
An example configuration looks like this:
Listing pcs node attribute:
[root@dc1hana01]# pcs node attribute
Node Attributes:
saphdb1: hana_hdb_gra=2.0 hana_hdb_site=DC1 hana_hdb_vhost=sapvirthdb1
saphdb2: hana_hdb_gra=2.0 hana_hdb_site=DC1 hana_hdb_vhost=sapvirthdb2
saphdb3: hana_hdb_gra=2.0 hana_hdb_site=DC2 hana_hdb_vhost=sapvirthdb3
saphdb4: hana_hdb_gra=2.0 hana_hdb_site=DC2 hana_hdb_vhost=sapvirthdb4
Please note that the attribute names listed by pcs node attribute, for example hana_hdb_site=DC1 on saphdb1, are in lower-case.
Assuming we have the current configuration:
- SID=RH1
- Instance_Number=10
Node | AZ | Attribute | Value |
dc1hana01 | DC1 | NFS_SHARED_RH1_SITE | DC1 |
dc1hana02 | DC1 | NFS_SHARED_RH1_SITE | DC1 |
dc2hana01 | DC2 | NFS_SHARED_RH1_SITE | DC2 |
dc2hana02 | DC2 | NFS_SHARED_RH1_SITE | DC2 |
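To translate this table into cluster configuration, the attributes could be set as in the following sketch, which uses the node and attribute names from the table (the full example below uses different hostnames):
[root@dc1hana01]# pcs node attribute dc1hana01 NFS_SHARED_RH1_SITE=DC1
[root@dc1hana01]# pcs node attribute dc1hana02 NFS_SHARED_RH1_SITE=DC1
[root@dc1hana01]# pcs node attribute dc2hana01 NFS_SHARED_RH1_SITE=DC2
[root@dc1hana01]# pcs node attribute dc2hana02 NFS_SHARED_RH1_SITE=DC2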
Below is an example of how to set the node attributes and the mount points for the shared and log filesystems; the data filesystems can be handled similarly:
[root@dc1hana01]# pcs resource create nfs_hana_shared_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/shared directory=/hana/shared fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone
[root@dc1hana01]# pcs resource create nfs_hana_log_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/lognode1 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone
[root@dc1hana01]# pcs resource create nfs_hana_log2_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/lognode2 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone
[root@dc1hana01]# pcs resource create nfs_hana_shared_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef78.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/shared directory=/hana/shared fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone
[root@dc1hana01]# pcs resource create nfs_hana_log_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef678.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/lognode1 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone
[root@dc1hana01]# pcs resource create nfs_hana_log2_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef678.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/lognode2 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone
[root@dc1hana01]# pcs node attribute sap-dc1-dbn2 NFS_HDB_SITE=DC1N2
[root@dc1hana01]# pcs node attribute sap-dc2-dbn1 NFS_HDB_SITE=DC2N1
[root@dc1hana01]# pcs node attribute sap-dc2-dbn2 NFS_HDB_SITE=DC2N2
[root@dc1hana01]# pcs node attribute sap-dc1-dbn1 NFS_SHARED_HDB_SITE=DC1
[root@dc1hana01]# pcs node attribute sap-dc1-dbn2 NFS_SHARED_HDB_SITE=DC1
[root@dc1hana01]# pcs node attribute sap-dc2-dbn1 NFS_SHARED_HDB_SITE=DC2
[root@dc1hana01]# pcs node attribute sap-dc2-dbn2 NFS_SHARED_HDB_SITE=DC2
[root@dc1hana01]# pcs constraint location nfs_hana_shared_dc1-clone rule resource-discovery=never score=-INFINITY NFS_SHARED_HDB_SITE ne DC1
[root@dc1hana01]# pcs constraint location nfs_hana_log_dc1-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC1N1
[root@dc1hana01]# pcs constraint location nfs_hana_log2_dc1-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC1N2
[root@dc1hana01]# pcs constraint location nfs_hana_shared_dc2-clone rule resource-discovery=never score=-INFINITY NFS_SHARED_HDB_SITE ne DC2
[root@dc1hana01]# pcs constraint location nfs_hana_log_dc2-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC2N1
[root@dc1hana01]# pcs constraint location nfs_hana_log2_dc2-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC2N2
[root@dc1hana01]# pcs resource enable nfs_hana_shared_dc1
[root@dc1hana01]# pcs resource enable nfs_hana_log_dc1
[root@dc1hana01]# pcs resource enable nfs_hana_log2_dc1
[root@dc1hana01]# pcs resource enable nfs_hana_shared_dc2
[root@dc1hana01]# pcs resource enable nfs_hana_log_dc2
[root@dc1hana01]# pcs resource enable nfs_hana_log2_dc2
[root@dc1hana01]# pcs resource update nfs_hana_shared_dc1-clone meta clone-max=2 interleave=true
[root@dc1hana01]# pcs resource update nfs_hana_shared_dc2-clone meta clone-max=2 interleave=true
[root@dc1hana01]# pcs resource update nfs_hana_log_dc1-clone meta clone-max=1 interleave=true
[root@dc1hana01]# pcs resource update nfs_hana_log_dc2-clone meta clone-max=1 interleave=true
[root@dc1hana01]# pcs resource update nfs_hana_log2_dc1-clone meta clone-max=1 interleave=true
[root@dc1hana01]# pcs resource update nfs_hana_log2_dc2-clone meta clone-max=1 interleave=true
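After the resources are enabled, you can verify that the filesystems are mounted and managed by the cluster; a minimal check, with output omitted, could be:
[root@dc1hana01]# pcs status resources
[root@dc1hana01]# df -h /hana/shared /hana/log/HDB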
4.3. Systemd-managed SAP services
If a systemd-enabled SAP HANA version is used (SAP HANA 2.0 SPS07 and later), a shutdown gracefully stops those services. In some environments, fencing causes a shutdown, which also stops the services gracefully. In such cases, Pacemaker might not work as expected.
Adding drop-in files prevents the services from being stopped, for example /etc/systemd/system/resource-agents-deps.target.d/sap_systemd_hdb_00.conf. You can also use other filenames.
root@saphdb1:/etc/systemd/system/resource-agents-deps.target.d# more sap_systemd_hdb_00.conf
[Unit]
Description=Pacemaker SAP resource HDB_00 needs the SAP Host Agent service
Wants=saphostagent.service
After=saphostagent.service
Wants=SAPHDB_00.service
After=SAPHDB_00.service
These files need to be activated. Use the following command:
[root]# systemctl daemon-reload
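To confirm that the drop-in is in effect, you can display the target unit together with its drop-in files and its dependencies; a quick check could look like this:
[root]# systemctl cat resource-agents-deps.target
[root]# systemctl list-dependencies resource-agents-deps.target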
For further information, please check Why does the stop operation of a SAPHana resource agent fail when the systemd based SAP startup framework is enabled?.
4.4. Additional hooks
Above, you have configured the srConnectionChanged() hook. You can also use an additional hook for srServiceStateChanged() to manage changes of the hdbindexserver processes of SAP HANA instances.
Perform the steps given below to activate the srServiceStateChanged() hook for each SAP HANA instance on all HA cluster nodes.
This solution is Technology Preview. Red Hat Global Support Services may create bug reports on behalf of subscribed customers who are creating support cases.
Procedure
Update the SAP HANA global.ini file on each node to enable use of the hook script by both SAP HANA instances (e.g., in file /hana/shared/RH1/global/hdb/custom/config/global.ini):
[ha_dr_provider_chksrv]
provider = ChkSrv
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 2
action_on_lost = stop
[trace]
ha_dr_saphanasr = info
ha_dr_chksrv = info
Set the optional parameters as shown below:
- action_on_lost (default: ignore)
- stop_timeout (default: 20)
- kill_signal (default: 9)
Below is an explanation of the available options for action_on_lost:
- ignore: This enables the feature, but only logs events. This is useful for monitoring the hook’s activity in the configured environment.
- stop: This executes a graceful sapcontrol -nr <nr> -function StopSystem.
- kill: This executes HDB kill-<signal> for the fastest stop.
Note: stop_timeout is added to the command execution of the stop and kill actions, and kill_signal is used in the kill action as part of the HDB kill-<signal> command.
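For illustration only, a sketch of how these optional parameters could look in the [ha_dr_provider_chksrv] section of global.ini when the kill action is chosen; the values shown are examples, not recommendations from this guide:
[ha_dr_provider_chksrv]
provider = ChkSrv
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 2
action_on_lost = kill
stop_timeout = 20
kill_signal = 9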
Reload the HA/DR providers to activate the new hook while HANA is running:
[rh1adm]$ hdbnsutil -reloadHADRProviders
Check the new trace file to verify the hook initialization:
[rh1adm]$ cdtrace
[rh1adm]$ cat nameserver_chksrv.trc
For more information, check Implementing a HA/DR Provider.