Chapter 8. Finishing the setup
Ensure that the final setup is complete and the systems and resources are healthy, then you can enable the environment for production workloads.
8.1. Enabling the automatic registration of HANA after a takeover
If you want a previously failed primary site to automatically recover as a fully functional secondary site without manual verification of the data consistency, you can enable the SAPHanaController resource to re-register the site right after a takeover.
This enables the previously failed primary site to continue the HANA system replication and automatically take over again in the event of a new failure of the new primary site.
Your HANA operator must decide whether to manually verify the health of the previously failed instance before re-registering the HANA site, or whether the priority is a faster automatic recovery of full high availability.
Procedure
Update the SAPHanaController resource and override the default AUTOMATED_REGISTER setting:
[root]# pcs resource update rsc_SAPHanaCon_<SID>_HDB<instance> AUTOMATED_REGISTER=true
Verification
Check that AUTOMATED_REGISTER is set to true:
[root]# pcs resource config rsc_SAPHanaCon_RH1_HDB02
Resource: rsc_SAPHanaCon_RH1_HDB02 (class=ocf provider=heartbeat type=SAPHanaController)
  Attributes: rsc_SAPHanaCon_RH1_HDB02-instance_attributes
    AUTOMATED_REGISTER=true
    DUPLICATE_PRIMARY_TIMEOUT=7200
    InstanceNumber=02
    PREFER_SITE_TAKEOVER=true
    SID=RH1
  ...
Setting AUTOMATED_REGISTER to true can increase the risk of data loss or corruption. If the HA cluster triggers a takeover while the data on the secondary HANA site is not fully in sync, the automatic registration of the old primary HANA site as the new secondary overwrites the data on that site, and any data that was not synced before the takeover is lost.
For more information, see the article on the SAP Technology Blog for Members: Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 2: Failure of Both Nodes.
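If you keep the default AUTOMATED_REGISTER=false instead, the previously failed primary must be re-registered manually after you have verified its data consistency. The following is a minimal sketch of that manual path, using the example SID RH1, instance number 02, and the site names and hosts from this chapter; run the hdbnsutil commands as the <sid>adm user (rh1adm here) on the former primary:

```shell
# 1. Inspect the replication state of the previously failed site:
hdbnsutil -sr_state

# 2. After verifying data consistency, re-register the old primary
#    (site DC1) as a secondary of the new primary on dc2hana1:
hdbnsutil -sr_register \
  --remoteHost=dc2hana1 \
  --remoteInstance=02 \
  --replicationMode=sync \
  --operationMode=logreplay \
  --name=DC1

# 3. As root on any cluster node, clear the failed resource state so
#    that the cluster starts HANA on the re-registered site again:
pcs resource cleanup rsc_SAPHanaCon_RH1_HDB02
```

These commands cannot run outside a live HANA cluster; adapt the host, instance, and site values to your environment.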
8.2. Reviewing the final cluster state
After you complete the configuration of a 4-node cluster for a scale-out HANA system replication setup, the status looks like the following example.
Your cluster state may deviate from the example, depending on your setup of optional or platform dependent resources, like the individual fencing or VIP resources.
Also, you can decide to disable the cluster service so that it does not start automatically on system boot. This requires manual intervention after every reboot, but gives you more control and supervision over the startup.
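If you choose manual startup, the commands look like this sketch; run them as root on one cluster node (these are standard pcs and systemctl invocations, not specific to this example cluster):

```shell
# Disable automatic start of the cluster services on boot, on all nodes:
pcs cluster disable --all

# After a reboot, once you have verified that the systems and resources
# are healthy, start the cluster manually:
pcs cluster start --all

# Optionally confirm that the services are no longer enabled on boot:
systemctl is-enabled corosync pacemaker
```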
[root]# pcs status --full
Cluster name: hana-scaleout-cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: dc2hana1 (3) (version 2.1.9-1.2.el9_6-49aab9983) - partition with quorum
* Last updated: Mon Sep 15 16:10:09 2025 on dc1hana2
* Last change: Mon Sep 15 16:10:01 2025 by root via root on dc1hana1
* 4 nodes configured
* 15 resource instances configured
Node List:
* Node dc1hana1 (1): online, feature set 3.19.6
* Node dc1hana2 (2): online, feature set 3.19.6
* Node dc2hana1 (3): online, feature set 3.19.6
* Node dc2hana2 (4): online, feature set 3.19.6
Full List of Resources:
* Clone Set: cln_SAPHanaTop_RH1_HDB02 [rsc_SAPHanaTop_RH1_HDB02]:
* rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology): Started dc1hana1
* rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology): Started dc1hana2
* rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology): Started dc2hana1
* rsc_SAPHanaTop_RH1_HDB02 (ocf:heartbeat:SAPHanaTopology): Started dc2hana2
* Clone Set: cln_SAPHanaCon_RH1_HDB02 [rsc_SAPHanaCon_RH1_HDB02] (promotable):
* rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Promoted dc1hana1
* rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted dc1hana2
* rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted dc2hana1
* rsc_SAPHanaCon_RH1_HDB02 (ocf:heartbeat:SAPHanaController): Unpromoted dc2hana2
* Clone Set: cln_SAPHanaFil_RH1_HDB02 [rsc_SAPHanaFil_RH1_HDB02]:
* rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc1hana1
* rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc1hana2
* rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc2hana1
* rsc_SAPHanaFil_RH1_HDB02 (ocf:heartbeat:SAPHanaFilesystem): Started dc2hana2
* rsc_vip_RH1_HDB02_primary (ocf:heartbeat:IPaddr2): Started dc1hana1
* rsc_vip_RH1_HDB02_readonly (ocf:heartbeat:IPaddr2): Started dc2hana1
Node Attributes:
* Node: dc1hana1 (1):
* hana_rh1_clone_state : PROMOTED
* hana_rh1_roles : master1:master:worker:master
* hana_rh1_site : DC1
* hana_rh1_sra : -
* hana_rh1_srah : -
* hana_rh1_version : 2.00.083.00
* hana_rh1_vhost : dc1hana1
* master-rsc_SAPHanaCon_RH1_HDB02 : 150
* Node: dc1hana2 (2):
* hana_rh1_clone_state : DEMOTED
* hana_rh1_roles : slave:slave:worker:slave
* hana_rh1_site : DC1
* hana_rh1_srah : -
* hana_rh1_version : 2.00.083.00
* hana_rh1_vhost : dc1hana2
* master-rsc_SAPHanaCon_RH1_HDB02 : -10000
* Node: dc2hana1 (3):
* hana_rh1_clone_state : DEMOTED
* hana_rh1_roles : master1:master:worker:master
* hana_rh1_site : DC2
* hana_rh1_sra : -
* hana_rh1_srah : -
* hana_rh1_version : 2.00.083.00
* hana_rh1_vhost : dc2hana1
* master-rsc_SAPHanaCon_RH1_HDB02 : 100
* Node: dc2hana2 (4):
* hana_rh1_clone_state : DEMOTED
* hana_rh1_roles : slave:slave:worker:slave
* hana_rh1_site : DC2
* hana_rh1_srah : -
* hana_rh1_version : 2.00.083.00
* hana_rh1_vhost : dc2hana2
* master-rsc_SAPHanaCon_RH1_HDB02 : -12200
Migration Summary:
Tickets:
PCSD Status:
dc1hana1: Online
dc1hana2: Online
dc2hana1: Online
dc2hana2: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
In a healthy setup, the additional cluster attributes appear as in this example:
[root]# SAPHanaSR-showAttr
Global cib-update dcid prim sec sid topology
---------------------------------------------
global 0.254.0 3 DC1 DC2 RH1 ScaleOut
Resource promotable
------------------------------------
cln_SAPHanaCon_RH1_HDB02 true
cln_SAPHanaTop_RH1_HDB02
Site lpt lss mns opMode srHook srMode srPoll srr
----------------------------------------------------------------
DC2 30 4 dc2hana1 logreplay SOK sync SOK S
DC1 1757952664 4 dc1hana1 logreplay PRIM sync PRIM P
Host clone_state roles score site sra srah version vhost
--------------------------------------------------------------------------------------------
dc1hana1 PROMOTED master1:master:worker:master 150 DC1 - - 2.00.083.00 dc1hana1
dc1hana2 DEMOTED slave:slave:worker:slave -10000 DC1 - 2.00.083.00 dc1hana2
dc2hana1 DEMOTED master1:master:worker:master 100 DC2 - - 2.00.083.00 dc2hana1
dc2hana2 DEMOTED slave:slave:worker:slave -12200 DC2 - 2.00.083.00 dc2hana2
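As a quick scripted health check, you can scan the SAPHanaSR-showAttr output for the expected site states: one site reporting PRIM and the other in sync (SOK). A minimal sketch follows; the helper function name and the embedded sample text are illustrative, not part of the SAPHanaSR tooling:

```shell
# check_sr_health: returns 0 if the given SAPHanaSR-showAttr output
# contains a PRIM site and at least one site in sync (SOK), matching
# the healthy example above.
check_sr_health() {
  local attr="$1"
  printf '%s\n' "$attr" | grep -q 'PRIM' &&
  printf '%s\n' "$attr" | grep -q 'SOK'
}

# In a live cluster you would feed it the real command output:
#   check_sr_health "$(SAPHanaSR-showAttr)" && echo "replication healthy"

# Demonstration with the Site table from the example above:
sample='DC2  30          4  dc2hana1  logreplay  SOK   sync  SOK   S
DC1  1757952664  4  dc1hana1  logreplay  PRIM  sync  PRIM  P'

check_sr_health "$sample" && echo "replication healthy"
```

A failed or still-syncing secondary typically reports SFAIL or SWAIT instead of SOK, in which case the check fails.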