Chapter 6. Useful commands
Below are three sections of useful commands. In most cases, they help to verify successful operation or configuration. Examples are listed together with their responses. In some cases, the output has been adjusted for formatting reasons.
- All commands listed in this document start with >, when the <sid>adm user executes the commands.
- All commands listed in this document start with #, when the root user executes the commands.
- To execute the commands, omit the prefix > or #.
6.1. SAP HANA commands
The <sid>adm user executes the SAP HANA commands. Example:
[root@az1n1]# su - rh2adm
az1n1:rh2adm> cdpy
az1n1:rh2adm> pwd
/usr/sap/RH2/HDB02/exe/python_support
az1n1:rh2adm> python systemReplicationStatus.py -h
systemReplicationStatus.py [-h|--help] [-a|--all] [-l|--localhost] [-m|--multiTarget] [-s|--site=<site name>] [-t|--printLandscapeTree] [--omitSecondaryActiveStatus] [--sapcontrol=1]
az1n1:rh2adm> python landscapeHostConfiguration.py -h
landscapeHostConfiguration.py [-h|--help] [--localhost] [--sapcontrol=1]
az1n1:rh2adm> hdbnsutil # run hdbnsutil without parameters to get help
6.1.1. Installing SAP HANA using hdblcm
The installation of the third site is similar to the installation of the second site. The installation can be done with hdblcm as the root user. To ensure that nothing is installed beforehand, run hdbuninst to verify that SAP HANA is not already installed on this node.
Example output of HANA uninstallation:
[root@az3n1]# cd /software/DATA_UNITS/HDB_SERVER_LINUX_X86_64
root@DC3/software/DATA_UNITS/HDB_SERVER_LINUX_X86_64# ./hdbuninst
Option 0 will remove an already existing HANA Installation
The expected answer is: No SAP HANA Installation found
Example output of HANA installation:
[root@az3n1]# ./hdblcm
1 install
2 server
/hana/shared is the default directory
Enter Local Hostname [az3n1]: use the default name
Do you want to add hosts to the system? (y/n) [n]: additional hosts are only needed for a scale-out installation; the default is n
ENTER SAP HANA System ID: RH2
Enter Instance Number [02]:
Enter Local Host Worker Group [default]:
Select System Usage / Enter Index [4]:
Choose encryption
Enter Location of Data Volumes [/hana/data/RH2]:
Enter Location of Log Volumes [/hana/log/RH2]:
Restrict maximum memory allocation? [n]:
Enter Certificate Host Name
Enter System Administrator (rh2adm) Password: <Y0urPasswd>
Confirm System Administrator (rh2adm) Password: <Y0urPasswd>
Enter System Administrator Home Directory [/usr/sap/RH2/home]:
Enter System Administrator Login Shell [/bin/sh]:
Enter System Administrator User ID [1000]:
Enter System Database User (SYSTEM) Password: <Y0urPasswd>
Confirm System Database User (SYSTEM) Password: <Y0urPasswd>
Restart system after machine reboot? [n]:
Before the installation starts, a summary is listed:
SAP HANA Database System Installation
Installation Parameters
Remote Execution: ssh
Database Isolation: low
Install Execution Mode: standard
Installation Path: /hana/shared
Local Host Name: dc3host
SAP HANA System ID: RH2
Instance Number: 02
Local Host Worker Group: default
System Usage: custom
Location of Data Volumes: /hana/data/RH2
Location of Log Volumes: /hana/log/RH2
SAP HANA Database secure store: ssfs
Certificate Host Names: az3n1 -> az3n1
System Administrator Home Directory: /usr/sap/RH2/home
System Administrator Login Shell: /bin/sh
System Administrator User ID: 1000
ID of User Group (sapsys): 1010
Software Components
SAP HANA Database
Install version 2.00.052.00.1599235305
Location: /software/DATA_UNITS/HDB_SERVER_LINUX_X86_64/server
SAP HANA Local Secure Store
Do not install
SAP HANA AFL (incl.PAL,BFL,OFL)
Do not install
SAP HANA EML AFL
Do not install
SAP HANA EPM-MDS
Do not install
SAP HANA Database Client
Do not install
SAP HANA Studio
Do not install
SAP HANA Smart Data Access
Do not install
SAP HANA XS Advanced Runtime
Do not install
Log File Locations
Log directory: /var/tmp/hdb_RH2_hdblcm_install_2021-06-09_18.48.13
Trace location: /var/tmp/hdblcm_2021-06-09_18.48.13_31307.trc
Do you want to continue? (y/n):
Enter y to start the installation.
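If the same installation has to be repeated on further nodes, hdblcm can also run non-interactively. The following is a minimal sketch, assuming the hdblcm options --dump_configfile_template, --batch, and --configfile; verify them against your version with ./hdblcm --help:
[root@az3n1]# ./hdblcm --dump_configfile_template=/tmp/hdblcm.cfg # generate a template listing all parameters
[root@az3n1]# vi /tmp/hdblcm.cfg # fill in SID, instance number, passwords, and paths as in the dialog above
[root@az3n1]# ./hdblcm --batch --configfile=/tmp/hdblcm.cfg # run the installation without prompts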
6.1.2. Checking Inifile contents using hdbsql
az1n1:rh2adm> hdbsql -i ${TINSTANCE} -u system -p Y0urP8ssw0rd
Welcome to the SAP HANA Database interactive terminal.
Type: \h for help with commands
\q to quit
hdbsql RH2=> select * from M_INIFILE_CONTENTS where section='system_replication'
FILE_NAME,LAYER_NAME,TENANT_NAME,HOST,SECTION,KEY,VALUE
"global.ini","DEFAULT","","","system_replication","actual_mode","primary"
"global.ini","DEFAULT","","","system_replication","mode","primary"
"global.ini","DEFAULT","","","system_replication","operation_mode","logreplay"
"global.ini","DEFAULT","","","system_replication","register_secondaries_on_takeover","true"
"global.ini","DEFAULT","","","system_replication","site_id","1"
"global.ini","DEFAULT","","","system_replication","site_name","DC2"
"global.ini","DEFAULT","","","system_replication","timetravel_logreplay_mode","auto"
"global.ini","DEFAULT","","","system_replication","alternative_sources",""
"global.ini","DEFAULT","","","system_replication","datashipping_logsize_threshold","5368709120"
"global.ini","DEFAULT","","","system_replication","datashipping_min_time_interval","600"
"global.ini","DEFAULT","","","system_replication","datashipping_parallel_channels","4"
"global.ini","DEFAULT","","","system_replication","datashipping_parallel_processing","true"
"global.ini","DEFAULT","","","system_replication","datashipping_snapshot_max_retention_time","300"
"global.ini","DEFAULT","","","system_replication","enable_data_compression","false"
"global.ini","DEFAULT","","","system_replication","enable_full_sync","false"
"global.ini","DEFAULT","","","system_replication","enable_log_compression","false"
"global.ini","DEFAULT","","","system_replication","enable_log_retention","auto"
"global.ini","DEFAULT","","","system_replication","full_replica_on_failed_delta_sync_check","false"
"global.ini","DEFAULT","","","system_replication","hint_based_routing_site_name",""
"global.ini","DEFAULT","","","system_replication","keep_old_style_alert","false"
"global.ini","DEFAULT","","","system_replication","logshipping_async_buffer_size","67108864"
"global.ini","DEFAULT","","","system_replication","logshipping_async_wait_on_buffer_full","true"
"global.ini","DEFAULT","","","system_replication","logshipping_max_retention_size","1048576"
"global.ini","DEFAULT","","","system_replication","logshipping_replay_logbuffer_cache_size","1073741824"
"global.ini","DEFAULT","","","system_replication","logshipping_replay_push_persistent_segment_count","5"
"global.ini","DEFAULT","","","system_replication","logshipping_snapshot_logsize_threshold","3221225472"
"global.ini","DEFAULT","","","system_replication","logshipping_snapshot_min_time_interval","900"
"global.ini","DEFAULT","","","system_replication","logshipping_timeout","30"
"global.ini","DEFAULT","","","system_replication","preload_column_tables","true"
"global.ini","DEFAULT","","","system_replication","propagate_log_retention","off"
"global.ini","DEFAULT","","","system_replication","reconnect_time_interval","30"
"global.ini","DEFAULT","","","system_replication","retries_before_register_to_alternative_source","20"
"global.ini","DEFAULT","","","system_replication","takeover_esserver_without_log_backup","false"
"global.ini","DEFAULT","","","system_replication","takeover_wait_until_esserver_restart","true"
"global.ini","DEFAULT","","","system_replication","timetravel_call_takeover_hooks","off"
"global.ini","DEFAULT","","","system_replication","timetravel_log_retention_policy","none"
"global.ini","DEFAULT","","","system_replication","timetravel_max_retention_time","0"
"global.ini","DEFAULT","","","system_replication","timetravel_snapshot_creation_interval","1440"
"indexserver.ini","DEFAULT","","","system_replication","logshipping_async_buffer_size","268435456"
"indexserver.ini","DEFAULT","","","system_replication","logshipping_replay_logbuffer_cache_size","4294967296"
"indexserver.ini","DEFAULT","","","system_replication","logshipping_replay_push_persistent_segment_count","20"
41 rows selected (overall time 1971.958 msec; server time 31.359 msec)
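To query a single parameter instead of the whole section, the SQL statement can also be passed directly on the hdbsql command line, in the same way as the backup examples later in this chapter. A minimal sketch; based on the output above, it returns the row for the mode key:
az1n1:rh2adm> hdbsql -i ${TINSTANCE} -u system "select * from M_INIFILE_CONTENTS where SECTION='system_replication' and KEY='mode'"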
6.1.3. Checking database
Check if the database is running and discover the current primary node.
Listing database instances
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList
23.06.2023 12:08:17
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
az1n1, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN
az1n2, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN
If the dispstatus is GREEN, the instance is running.
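For scripting, the status can also be evaluated via the sapcontrol exit code instead of parsing the output. A minimal sketch, assuming sapcontrol's documented exit codes for GetProcessList (3: all processes GREEN, 4: all processes stopped):
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList > /dev/null; echo $? # 3 means all processes are GREEN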
Listing database processes
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2023 09 04 14:34:01, 18:41:33, 3788067
hdbcompileserver, HDB Compileserver, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445299
hdbindexserver, HDB Indexserver-RH2, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445391
hdbnameserver, HDB Nameserver, GREEN, Running, 2023 09 04 22:35:34, 10:40:00, 445178
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445306
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2023 09 04 22:35:53, 10:39:41, 445955
hdbxsengine, HDB XSEngine-RH2, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445394
Usually, all database processes have the status GREEN.
Listing SAP HANA processes
az1n1:rh2adm> HDB info
USER PID PPID %CPU VSZ RSS COMMAND
rh2adm 1560 1559 0.0 6420 3136 watch -n 5 sapcontrol -nr 02 -functi
rh2adm 1316 1315 0.0 8884 5676 -sh
rh2adm 2549 1316 0.0 7516 4072 \_ /bin/sh /usr/sap/RH2/HDB02/HDB i
rh2adm 2579 2549 0.0 10144 3576 \_ ps fx -U rh2adm -o user:8,pi
rh2adm 2388 1 0.0 679536 55520 hdbrsutil --start --port 30203 --vo
rh2adm 1921 1 0.0 679196 55312 hdbrsutil --start --port 30201 --vo
rh2adm 1469 1 0.0 8852 3260 sapstart pf=/usr/sap/RH2/SYS/profile
rh2adm 1476 1469 0.7 438316 86288 \_ /usr/sap/RH2/HDB02/az3n1/trace/
rh2adm 1501 1476 11.7 9690172 1574796 \_ hdbnameserver
rh2adm 1845 1476 0.8 410696 122988 \_ hdbcompileserver
rh2adm 1848 1476 1.0 659464 154072 \_ hdbpreprocessor
rh2adm 1899 1476 14.7 9848276 1765208 \_ hdbindexserver -port 30203
rh2adm 1902 1476 8.4 5023288 1052768 \_ hdbxsengine -port 30207
rh2adm 2265 1476 5.2 2340284 405016 \_ hdbwebdispatcher
rh2adm 1117 1 1.1 543532 30676 /usr/sap/RH2/HDB02/exe/sapstartsrv p
rh2adm 1029 1 0.0 20324 11572 /usr/lib/systemd/systemd --user
rh2adm 1030 1029 0.0 23256 3536 \_ (sd-pam)
Displaying SAP HANA landscape configuration
az1n1:rh2adm> /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/landscapeHostConfiguration.py; echo $?
| Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker |
| | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual |
| | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups |
| ---------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------ | ------ | ------- | ------- |
| az1n1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default |
| az1n2 | yes | ok | | | 2 | 2 | default | default | slave | slave | worker | slave | worker | worker | default | default |
overall host status: ok
Return codes:
- 0: Fatal
- 1: Error
- 2: Warning
- 3: Info
- 4: OK
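Based on these return codes, the overall host status can be checked in a script. A minimal sketch, treating only return code 4 (OK) as healthy per the list above, and using the cdpy alias shown at the beginning of this chapter:
az1n1:rh2adm> cdpy
az1n1:rh2adm> python landscapeHostConfiguration.py > /dev/null; [ $? -eq 4 ] && echo "overall host status: ok"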
Discovering primary database
az1n1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
Example of check on a secondary:
az1n1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
mode: syncmem
primary masters: az1n1
Example of check on the current primary:
az1n1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
mode: primary
az1n1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep "site.*Mode"
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
Displaying the database version
Example using SQL query:
hdbsql RH2=> select * from m_database
SYSTEM_ID,DATABASE_NAME,HOST,START_TIME,VERSION,USAGE
"RH2","RH2","node1","2023-06-22 15:33:05.235000000","2.00.059.02.1647435895","CUSTOM"
1 row selected (overall time 29.107 msec; server time 927 usec)
Example using systemOverview.py:
az1n1:rh2adm> python ./systemOverview.py
| Section | Name | Status | Value |
| ---------- | --------------- | ------- | ------------------------------------------- |
| System | Instance ID | | RH1 |
| System | Instance Number | | 02 |
| System | Distributed | | Yes (2 hosts) |
| System | Version | | 2.00.064.00.1660047502 (fa/hana2sp06) |
| System | Platform | | Red Hat Enterprise Linux 9.4 (Plow) |
| Services | All Started | OK | Yes |
| Services | Min Start Time | | 2025-05-28 13:29:55.000 |
| Services | Max Start Time | | 2025-05-28 13:31:12.253 |
| Memory | Memory | OK | Physical 62.18 GB, Swap 4.00 GB, Used 23.16 |
| CPU | CPU | OK | Available 8, Used 1.40 |
| Disk | Data | ERROR | Size 149.9 GB, Used 149.9 GB, Free 0 % |
| Disk | Log | ERROR | Size 149.9 GB, Used 149.9 GB, Free 0 % |
| Disk | Trace | ERROR | Size 149.9 GB, Used 149.9 GB, Free 0 % |
| Statistics | Alerts          | WARNING | cannot check statistics w/o SQL connection  |
6.1.4. Starting and stopping SAP HANA
Option 1: Use HDB command
az1n1:rh2adm> HDB help
Usage: /usr/sap/RH2/HDB02/HDB { start|stop|reconf|restart|version|info|proc|admin|kill|kill-<sig>|term }
kill or kill-9 should never be used in a production environment!
Start the database:
az1n1:rh2adm> HDB start
Stop the database:
az1n1:rh2adm> HDB stop
Option 2 (recommended): Use sapcontrol
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function StartSystem HDB
03.07.2023 14:08:30
StartSystem
OK
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function StopSystem HDB
StopSystem
OK
Use the sapcontrol function GetProcessList to monitor the starting and stopping of the HANA services:
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList
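Instead of polling GetProcessList manually, sapcontrol can also wait until the start or stop has completed. A minimal sketch using the sapcontrol functions WaitforStarted and WaitforStopped (arguments are timeout and polling interval in seconds):
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function WaitforStarted 300 10 # block until the instance is up, at most 300 seconds
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function WaitforStopped 300 10 # block until the instance is down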
6.1.5. Checking SAP HANA system replication status
There are many ways to check the SAP HANA System Replication status:
- az1n1:rh2adm> python systemReplicationStatus.py on the primary node
- az1n1:rh2adm> echo $? (return code of systemReplicationStatus.py)
- az1n1:rh2adm> hdbnsutil -sr_state
- az1n1:rh2adm> hdbnsutil -sr_stateConfiguration
Example of systemReplicationStatus.py output running as a monitor:
az1n1:rh2adm> watch -n 5 "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"
Every 5.0s: python systemReplicationStatus.py;echo $? hana08: Fri Jul 28 17:01:05 2023
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |
|SYSTEMDB |hana08 |30201 |nameserver | 1 | 1 |DC2 |hana09 | 30201 | 3 |DC3 |YES |SYNCMEM |ACTIVE | |
|RH2 |hana08 |30207 |xsengine | 2 | 1 |DC2 |hana09 | 30207 | 3 |DC3 |YES |SYNCMEM |ACTIVE | |
|RH2 |hana08 |30203 |indexserver | 3 | 1 |DC2 |hana09 | 30203 | 3 |DC3 |YES |SYNCMEM |ACTIVE | |
|SYSTEMDB |hana08 |30201 |nameserver | 1 | 1 |DC2 |az3n1 | 30201 | 2 |DC1 |YES |SYNCMEM |ACTIVE | |
|RH2 |hana08 |30207 |xsengine | 2 | 1 |DC2 |az3n1 | 30207 | 2 |DC1 |YES |SYNCMEM |ACTIVE | |
|RH2 |hana08 |30203 |indexserver | 3 | 1 |DC2 |az3n1 | 30203 | 2 |DC1 |YES |SYNCMEM |ACTIVE | |
status system replication site "3": ACTIVE
status system replication site "2": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 1
site name: DC2
15
The expected results for the return codes are:
- 10: NoHSR
- 11: Error
- 12: Unknown
- 13: Initializing
- 14: Syncing
- 15: Active
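Based on these return codes, a scripted health check only needs to compare the exit status against 15. A minimal sketch using the cdpy alias shown earlier:
az1n1:rh2adm> cdpy
az1n1:rh2adm> python systemReplicationStatus.py > /dev/null; [ $? -eq 15 ] && echo "system replication is ACTIVE"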
In most cases the system replication check returns with return code 15. Another display option is to use -t (printLandscapeTree).
Example for the output on the current primary:
az1n1:rh2adm> python systemReplicationStatus.py -t
HANA System Replication landscape:
DC1 ( primary )
| --- DC3 ( syncmem )
| --- DC2 ( syncmem )
Example of hdbnsutil -sr_state:
[root@az1n1]# su - rh2adm
az1n1:rh2adm> watch -n 10 hdbnsutil -sr_state
Every 10.0s: hdbnsutil -sr_state az1n1: Thu Jun 22 08:42:00 2023
System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~
online: true
mode: syncmem
operation mode: logreplay
site id: 2
site name: DC1
is source system: false
is secondary/consumer system: true
has secondaries/consumers attached: false
is a takeover active: false
is primary suspended: false
is timetravel enabled: false
replay mode: auto
active primary site: 1
primary masters: az2n1
Host Mappings:
~~~~~~~~~~~~~~
az1n1 -> [DC3] az3n1
az1n1 -> [DC1] az1n1
az1n1 -> [DC2] az2n1
Site Mappings:
~~~~~~~~~~~~~~
DC2 (primary/primary)
|---DC3 (syncmem/logreplay)
|---DC1 (syncmem/logreplay)
Tier of DC2: 1
Tier of DC3: 2
Tier of DC1: 2
Replication mode of DC2: primary
Example of sr_stateConfiguation on the primary:
az1n1:rh2adm> hdbnsutil -sr_stateConfiguration
System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~
mode: primary
site id: 2
site name: DC1
done.
Example of sr_stateConfiguration on the secondary:
az1n1:rh2adm> hdbnsutil -sr_stateConfiguration
System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~
mode: syncmem
site id: 1
site name: DC2
active primary site: 2
primary masters: az1n1
done.
You can also check in the secondary database which node is the current primary. During a failover, there can temporarily be two primary databases, and this information is needed to decide which potential primary database is wrong and needs to be re-registered as secondary.
6.1.6. Registering secondary node
Preconditions to register a secondary database for a SAP HANA system replication environment:
- Creating a SAP HANA backup
- Enabling SAP HANA system replication on the primary node
- Copying the database keys
- Registering the secondary node
Registration example:
az1n1:rh2adm> hdbnsutil -sr_register --remoteHost=az2n1 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC1 --online
--operationMode not set; using default from global.ini/[system_replication]/operation_mode: logreplay
adding site ...
collecting information ...
updating local ini files ...
done.
With the registration, the global.ini file is automatically updated
… from:
# global.ini last modified 2023-06-15 09:55:05.665341 by /usr/sap/RH2/HDB02/exe/hdbnsutil -initTopology --workergroup=default --set_user_system_pw
[multidb]
mode = multidb
database_isolation = low
singletenant = yes
[persistence]
basepath_datavolumes = /hana/data/RH2
basepath_logvolumes = /hana/log/RH2
… to:
# global.ini last modified 2023-06-15 11:25:44.516946 by hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=02 --replicationMode=syncmem --name=DC1 --online
[multidb]
mode = multidb
database_isolation = low
singletenant = yes
[persistence]
basepath_datavolumes = /hana/data/RH2
basepath_logvolumes = /hana/log/RH2
[system_replication]
timetravel_logreplay_mode = auto
site_id = 3
mode = syncmem
actual_mode = syncmem
site_name = DC1
operation_mode = logreplay
[system_replication_site_masters]
1 = az2n1:30201
6.1.7. sapcontrol GetProcessList
Checking the processes of an active SAP HANA database
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList
07.06.2023 08:23:03
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2023 06 02 16:59:42, 111:23:21, 4245
hdbcompileserver, HDB Compileserver, GREEN, Running, 2023 06 02 17:01:35, 111:21:28, 7888
hdbindexserver, HDB Indexserver-RH2, GREEN, Running, 2023 06 02 17:01:36, 111:21:27, 7941
hdbnameserver, HDB Nameserver, GREEN, Running, 2023 06 02 17:01:29, 111:21:34, 7594
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2023 06 02 17:01:35, 111:21:28, 7891
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2023 06 02 17:01:42, 111:21:21, 8339
hdbxsengine, HDB XSEngine-RH2, GREEN, Running, 2023 06 02 17:01:36, 111:21:27, 7944
6.1.8. sapcontrol GetInstanceList
This lists the status of instances of a SAP HANA database. It also shows the ports. There are three different status names:
- GREEN (running)
- GRAY (stopped)
- YELLOW (status is currently changing)
Example of an active instance:
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList
07.06.2023 08:24:13
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
az1n1, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN
az1n2, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN
Example of a stopped instance:
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList
22.06.2023 09:14:55
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
az3n1, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GRAY
6.1.9. hdbcons examples
You can also use the HDB Console to display information about the database:
- hdbcons -e hdbindexserver 'replication info'
- hdbcons -e hdbindexserver help for more options
Example of ‘replication info’:
az1n1:rh2adm> hdbcons -e hdbindexserver 'replication info'
hdbcons -p `pgrep hdbindex` 'replication info'
SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process with PID 451925
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 451925)
[OK]
--
## Start command at: 2023-06-22 09:05:25.211
listing default statistics for volume 3
System Replication Primary Information
======================================
System Replication Primary Configuration
[system_replication] logshipping_timeout = 30
[system_replication] enable_full_sync = false
[system_replication] preload_column_tables = true
[system_replication] ensure_backup_history = true
[system_replication_communication] enable_ssl = off
[system_replication] keep_old_style_alert = false
[system_replication] enable_log_retention = auto
[system_replication] logshipping_max_retention_size = 1048576
[system_replication] logshipping_async_buffer_size = 268435456
- lastLogPos : 0x4ab2700
- lastLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- lastConfirmedLogPos : 0x4ab2700
- lastConfirmedLogPosTimestamp: 22.06.2023-07.05.25 (1687417525193952)
- lastSavepointVersion : 1286
- lastSavepointLogPos : 0x4ab0602
- lastSavepointTimestamp : 22.06.2023-07.02.42 (1687417362853007)
2 session registered.
Session index 0
- SiteID : 3
- RemoteHost : 192.168.5.137
Log Connection
- ptr : 0x00007ff04c0a1000
- channel : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671686293528, fd=70, refCnt=2, idx=5, local=192.168.5.134/40203_tcp, remote=192.168.5.137/40406_tcp, state=Connected, pending=[r---]}}}
- SSLActive : false
- mode : syncmem
Data Connection
- ptr : 0x00007ff08b730000
- channel : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671436247064, fd=68, refCnt=2, idx=6, local=192.168.5.134/40203_tcp, remote=192.168.5.137/40408_tcp, state=Connected, pending=[r---]}}}
- SSLActive : false
Primary Statistics
- Creation Timestamp : 20.06.2023-13.55.07 (1687269307772532)
- Last Reset Timestamp : 20.06.2023-13.55.07 (1687269307772532)
- Statistic Reset Count : 0
- ReplicationMode : syncmem
- OperationMode : logreplay
- ReplicationStatus : ReplicationStatus_Active
- ReplicationStatusDetails :
- ReplicationFullSync : DISABLED
- shippedLogPos : 0x4ab2700
- shippedLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- sentLogPos : 0x4ab2700
- sentLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- sentMaxLogWriteEndPosition : 0x4ab2700
- sentMaxLogWriteEndPositionReqCnt: 0x1f6b8
- shippedLogBuffersCount : 142439
- shippedLogBuffersSize : 805855232 bytes
- shippedLogBuffersSizeUsed : 449305792 bytes (55.76%)
- shippedLogBuffersSizeNet : 449013696 bytes (55.72%)
- shippedLogBufferDuration : 83898615 microseconds
- shippedLogBufferDurationMin : 152 microseconds
- shippedLogBufferDurationMax : 18879 microseconds
- shippedLogBufferDurationSend : 7301067 microseconds
- shippedLogBufferDurationComp : 0 microseconds
- shippedLogBufferThroughput : 9709099.18 bytes/s
- shippedLogBufferPendingDuration : 80583785 microseconds
- shippedLogBufferRealThrougput : 10073190.40 bytes/s
- replayLogPos : 0x4ab2700
- replayLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- replayBacklog : 0 microseconds
- replayBacklogSize : 0 bytes
- replayBacklogMax : 822130896 microseconds
- replayBacklogSizeMax : 49455104 bytes
- shippedSavepointVersion : 0
- shippedSavepointLogPos : 0x0
- shippedSavepointTimestamp : not set
- shippedFullBackupCount : 0
- shippedFullBackupSize : 0 bytes
- shippedFullBackupSizeNet : 0 bytes (-nan%)
- shippedFullBackupDuration : 0 microseconds
- shippedFullBackupDurationComp : 0 microseconds
- shippedFullBackupThroughput : 0.00 bytes/s
- shippedFullBackupStreamCount : 0
- shippedFullBackupResumeCount : 0
- shippedLastFullBackupSize : 0 bytes
- shippedLastFullBackupSizeNet : 0 bytes (-nan%)
- shippedLastFullBackupStart : not set
- shippedLastFullBackupEnd : not set
- shippedLastFullBackupDuration : 0 microseconds
- shippedLastFullBackupStreamCount : 0
- shippedLastFullBackupResumeCount : 0
- shippedDeltaBackupCount : 0
- shippedDeltaBackupSize : 0 bytes
- shippedDeltaBackupSizeNet : 0 bytes (-nan%)
- shippedDeltaBackupDuration : 0 microseconds
- shippedDeltaBackupDurationComp : 0 microseconds
- shippedDeltaBackupThroughput : 0.00 bytes/s
- shippedDeltaBackupStreamCount : 0
- shippedDeltaBackupResumeCount : 0
- shippedLastDeltaBackupSize : 0 bytes
- shippedLastDeltaBackupSizeNet : 0 bytes (-nan%)
- shippedLastDeltaBackupStart : not set
- shippedLastDeltaBackupEnd : not set
- shippedLastDeltaBackupDuration : 0 microseconds
- shippedLastDeltaBackupStreamCount : 0
- shippedLastDeltaBackupResumeCount : 0
- currentTransferType : None
- currentTransferSize : 0 bytes
- currentTransferPosition : 0 bytes (0%)
- currentTransferStartTime : not set
- currentTransferThroughput : 0.00 MB/s
- currentTransferStreamCount : 0
- currentTransferResumeCount : 0
- currentTransferResumeStartTime : not set
- Secondary sync'ed via Log Count : 1
- syncLogCount : 3
- syncLogSize : 62840832 bytes
- backupHistoryComplete : 1
- backupLogPosition : 0x4a99980
- backupLogPositionUpdTimestamp : 22.06.2023-06.56.27 (0x5feb26227e7af)
- shippedMissingLogCount : 0
- shippedMissingLogSize : 0 bytes
- backlogSize : 0 bytes
- backlogTime : 0 microseconds
- backlogSizeMax : 0 bytes
- backlogTimeMax : 0 microseconds
- Secondary Log Connect time : 20.06.2023-13.55.31 (1687269331361049)
- Secondary Data Connect time : 20.06.2023-13.55.33 (1687269333768341)
- Secondary Log Close time : not set
- Secondary Data Close time : 20.06.2023-13.55.31 (1687269331290050)
- Secondary Log Reconnect Count : 0
- Secondary Log Failover Count : 0
- Secondary Data Reconnect Count : 1
- Secondary Data Failover Count : 0
----------------------------------------------------------------
Session index 1
- SiteID : 2
- RemoteHost : 192.168.5.133
Log Connection
- ptr : 0x00007ff0963e4000
- channel : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671506282520, fd=74, refCnt=2, idx=0, local=192.168.5.134/40203_tcp, remote=192.168.5.133/40404_tcp, state=Connected, pending=[r---]}}}
- SSLActive : false
- mode : syncmem
Data Connection
- ptr : 0x00007ff072c04000
- channel : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671463146520, fd=75, refCnt=2, idx=1, local=192.168.5.134/40203_tcp, remote=192.168.5.133/40406_tcp, state=Connected, pending=[r---]}}}
- SSLActive : false
Primary Statistics
- Creation Timestamp : 20.06.2023-13.55.49 (1687269349892111)
- Last Reset Timestamp : 20.06.2023-13.55.49 (1687269349892111)
- Statistic Reset Count : 0
- ReplicationMode : syncmem
- OperationMode : logreplay
- ReplicationStatus : ReplicationStatus_Active
- ReplicationStatusDetails :
- ReplicationFullSync : DISABLED
- shippedLogPos : 0x4ab2700
- shippedLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- sentLogPos : 0x4ab2700
- sentLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- sentMaxLogWriteEndPosition : 0x4ab2700
- sentMaxLogWriteEndPositionReqCnt: 0x1f377
- shippedLogBuffersCount : 142326
- shippedLogBuffersSize : 793939968 bytes
- shippedLogBuffersSizeUsed : 437675200 bytes (55.13%)
- shippedLogBuffersSizeNet : 437565760 bytes (55.11%)
- shippedLogBufferDuration : 76954026 microseconds
- shippedLogBufferDurationMin : 115 microseconds
- shippedLogBufferDurationMax : 19285 microseconds
- shippedLogBufferDurationSend : 2951495 microseconds
- shippedLogBufferDurationComp : 0 microseconds
- shippedLogBufferThroughput : 10446578.53 bytes/s
- shippedLogBufferPendingDuration : 73848247 microseconds
- shippedLogBufferRealThrougput : 10875889.97 bytes/s
- replayLogPos : 0x4ab2700
- replayLogPosTimestamp : 22.06.2023-07.05.25 (1687417525193952)
- replayBacklog : 0 microseconds
- replayBacklogSize : 0 bytes
- replayBacklogMax : 113119944 microseconds
- replayBacklogSizeMax : 30171136 bytes
- shippedSavepointVersion : 0
- shippedSavepointLogPos : 0x0
- shippedSavepointTimestamp : not set
- shippedFullBackupCount : 0
- shippedFullBackupSize : 0 bytes
- shippedFullBackupSizeNet : 0 bytes (-nan%)
- shippedFullBackupDuration : 0 microseconds
- shippedFullBackupDurationComp : 0 microseconds
- shippedFullBackupThroughput : 0.00 bytes/s
- shippedFullBackupStreamCount : 0
- shippedFullBackupResumeCount : 0
- shippedLastFullBackupSize : 0 bytes
- shippedLastFullBackupSizeNet : 0 bytes (-nan%)
- shippedLastFullBackupStart : not set
- shippedLastFullBackupEnd : not set
- shippedLastFullBackupDuration : 0 microseconds
- shippedLastFullBackupStreamCount : 0
- shippedLastFullBackupResumeCount : 0
- shippedDeltaBackupCount : 0
- shippedDeltaBackupSize : 0 bytes
- shippedDeltaBackupSizeNet : 0 bytes (-nan%)
- shippedDeltaBackupDuration : 0 microseconds
- shippedDeltaBackupDurationComp : 0 microseconds
- shippedDeltaBackupThroughput : 0.00 bytes/s
- shippedDeltaBackupStreamCount : 0
- shippedDeltaBackupResumeCount : 0
- shippedLastDeltaBackupSize : 0 bytes
- shippedLastDeltaBackupSizeNet : 0 bytes (-nan%)
- shippedLastDeltaBackupStart : not set
- shippedLastDeltaBackupEnd : not set
- shippedLastDeltaBackupDuration : 0 microseconds
- shippedLastDeltaBackupStreamCount : 0
- shippedLastDeltaBackupResumeCount : 0
- currentTransferType : None
- currentTransferSize : 0 bytes
- currentTransferPosition : 0 bytes (0%)
- currentTransferStartTime : not set
- currentTransferThroughput : 0.00 MB/s
- currentTransferStreamCount : 0
- currentTransferResumeCount : 0
- currentTransferResumeStartTime : not set
- Secondary sync'ed via Log Count : 1
- syncLogCount : 3
- syncLogSize : 61341696 bytes
- backupHistoryComplete : 1
- backupLogPosition : 0x4a99980
- backupLogPositionUpdTimestamp : 22.06.2023-06.56.27 (0x5feb26227e670)
- shippedMissingLogCount : 0
- shippedMissingLogSize : 0 bytes
- backlogSize : 0 bytes
- backlogTime : 0 microseconds
- backlogSizeMax : 0 bytes
- backlogTimeMax : 0 microseconds
- Secondary Log Connect time : 20.06.2023-13.56.21 (1687269381053599)
- Secondary Data Connect time : 20.06.2023-13.56.27 (1687269387399610)
- Secondary Log Close time : not set
- Secondary Data Close time : 20.06.2023-13.56.21 (1687269381017244)
- Secondary Log Reconnect Count : 0
- Secondary Log Failover Count : 0
- Secondary Data Reconnect Count : 1
- Secondary Data Failover Count : 0
----------------------------------------------------------------
[OK]
## Finish command at: 2023-06-22 09:05:25.212 command took: 572.000 usec
--
[EXIT]
--
[BYE]
Example of help:
az1n1:rh2adm> hdbcons -e hdbindexserver help
SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process with PID 451925
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 451925)
[OK]
--
## Start command at: 2023-06-22 09:07:16.784
Synopsis:
help [<command name>]: Print command help
- <command name> - Command name for which to display help
Available commands:
ae_tableload - Handle loading of column store tables and columns
all - Print help and other info for all hdbcons commands
authentication - Authentication management.
binarysemaphore - BinarySemaphore management
bye - Exit console client
cd - ContainerDirectory management
cfgreg - Basis Configurator
checktopic - CheckTopic management
cnd - ContainerNameDirectory management
conditionalvariable - ConditionalVariable management
connection - Connection management
context - Execution context management (i.e., threads)
converter - Converter management
cpuresctrl - Manage cpu resources such as last-level cache allocation
crash - Crash management
crypto - Cryptography management (SSL/SAML/X509/Encryption).
csaccessor - Display diagnostics related to the CSAccessor library
ddlcontextstore - Get DdlContextStore information
deadlockdetector - Deadlock detector.
debug - Debug management
distribute - Handling distributed systems
dvol - DataVolume management
ELF - ELF symbol resolution management
encryption - Persistence encryption management
eslog - Manipulate logger on extended storage
event - Event management
exit - Exit console client
flightrecorder - Flight Recorder
hananet - HANA-Net command interface
help - Display help for a command or command list
hkt - HANA Kernal Tracer (HKT) management
indexmanager - Get IndexManager information, especially for IndexHandles
itab - Internaltable diagnostics
jexec - Information and actions for Job Executor/Scheduler
licensing - Licensing management.
log - Show information about logger and manipulate logger
machine - Information about the machine topology
mm - Memory management
monitor - Monitor view command
mproxy - Malloc proxy management
msl - Mid size LOB management
mutex - Mutex management
numa - Provides NUMA statistics for all columns of a given table, broken down by column constituents like dictionary, data vector and index.
nvmprovider - NVM Provider
output - Command for managing output from the hdbcons
page - Page management
pageaccess - PageAccess management
profiler - Profiler
quit - Exit console client
readwritelock - ReadWriteLock management
replication - Monitor data and log replication
resman - ResourceManager management
rowstore - Row Store
runtimedump - Generate a runtime dump.
savepoint - Savepoint management
semaphore - Semaphore management
servicethreads - Thread information M_SERVICE_THREADS
snapshot - Snapshot management
stat - Statistics management
statisticsservercontroller - StatisticsServer internals
statreg - Statistics registry command
syncprimi - Syncprimitive management (Mutex, CondVariable, Semaphore, BinarySemaphore, ReadWriteLock)
table - Table Management
tablepreload - Manage and monitor table preload
trace - Trace management
tracetopic - TraceTopic management
transaction - Transaction management
ut - UnifiedTable Management
version - Version management
vf - VirtualFile management
x2 - get X2 info
[OK]
## Finish command at: 2023-06-22 09:07:16.785 command took: 209.000 usec
--
[EXIT]
--
[BYE]
6.1.10. Creating SAP HANA backup
If you want to use SAP HANA system replication, a backup must first be created on the primary system.
Example of how to perform this as the <sid>adm user:
az1n1:rh2adm> hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB "BACKUP DATA USING FILE ('/hana/backup/')"
az1n1:rh2adm> hdbsql -i ${TINSTANCE} -u system -d ${SAPSYSTEMNAME} "BACKUP DATA USING FILE ('/hana/backup/')"
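Whether the backups were created can be verified in the backup catalog. A minimal sketch querying the M_BACKUP_CATALOG system view:
az1n1:rh2adm> hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB "select ENTRY_TYPE_NAME, UTC_START_TIME, STATE_NAME from M_BACKUP_CATALOG order by UTC_START_TIME desc"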
6.1.11. Enabling SAP HANA system replication on the primary database
SAP HANA system replication has to be enabled on the primary node. This requires a backup to be done first.
az1n1:rh2adm> hdbnsutil -sr_enable --name=DC1
nameserver is active, proceeding ...
successfully enabled system as system replication source site
done.
6.1.12. Copying database keys to the secondary nodes
The database keys need to be copied from the primary to the secondary database before it can be registered as a secondary.
For example:
az1n1:rh2adm> scp -rp /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT az3n1:/usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT
az1n1:rh2adm> scp -rp /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_${SAPSYSTEMNAME}.KEY az3n1:/usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_${SAPSYSTEMNAME}.KEY
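To verify that the keys were copied correctly, the checksums on both nodes can be compared. A minimal sketch:
az1n1:rh2adm> md5sum /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT
az1n1:rh2adm> ssh az3n1 md5sum /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT # checksums must match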
6.1.13. Registering a secondary node for SAP HANA system replication
You must ensure that the database keys have been copied to the secondary nodes first. Then run the registration command:
az1n1:rh2adm> hdbnsutil -sr_register --remoteHost=az3n1 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC1 --remoteName=DC3 --operationMode=logreplay --online
Parameter description:
- remoteHost: hostname of the active node running the source (primary) database
- remoteInstance: the instance number of the database
- replicationMode: one of the following options
  - sync: hard disk synchronization
  - async: asynchronous replication
  - syncmem: memory synchronization
- name: an alias for this replication site
- remoteName: alias name of the source database
- operationMode: one of the following options
  - delta_datashipping: data is periodically transmitted; takeovers take a little bit longer
  - logreplay: logs are redone immediately on the remote site; takeover is faster
  - logreplay_readaccess: additional logreplay read-only access to the second site is possible
6.1.14. Checking the log_mode of the SAP HANA database
There are two options for setting the log_mode:
- log_mode=overwrite
- log_mode=normal: This is the default value and is also required when the database instance is running as primary. With SAP HANA multitarget system replication, you have to use log_mode=normal. The best way to check the log_mode is to use hdbsql:
Example including a wrong overwrite entry:
az1n1:rh2adm> hdbsql -i ${TINSTANCE} -d ${SAPSYSTEMNAME} -u system
Password:
Welcome to the SAP HANA Database interactive terminal.
Type: \h for help with commands
\q to quit
hdbsql RH2=> select * from m_inifile_contents where key='log_mode'
FILE_NAME,LAYER_NAME,TENANT_NAME,HOST,SECTION,KEY,VALUE
"global.ini","DEFAULT","","","persistence","log_mode","normal"
"global.ini","HOST","","node2","persistence","log_mode","overwrite"
2 rows selected (overall time 46.931 msec; server time 30.845 msec)
hdbsql RH2=>exit
In this case, we have two global.ini files:
- DEFAULT: /usr/sap/${SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini
- HOST: /hana/shared/${SAPSYSTEMNAME}/HDB${TINSTANCE}/${HOSTNAME}/global.ini
The HOST values overwrite the DEFAULT values. You can also check both files before the database starts and then use hdbsql again to verify the right settings. Edit the global.ini file to change the log_mode.
Example:
az1n1:rh2adm> vi /hana/shared/${SAPSYSTEMNAME}/HDB${TINSTANCE}/${HOSTNAME}/global.ini
Change the entry from:
# global.ini last modified 2023-04-06 16:15:03.521715 by hdbnameserver
[persistence]
log_mode = overwrite
to:
# global.ini last modified 2023-04-06 16:15:03.521715 by hdbnameserver
[persistence]
log_mode = normal
After having checked or updated the global.ini file(s), verify the log_mode values:
az1n1:rh2adm> hdbsql -d ${SAPSYSTEMNAME} -i ${TINSTANCE} -u SYSTEM
hdbsql RH2=> select * from m_inifile_contents where section='persistence' and key='log_mode'
FILE_NAME,LAYER_NAME,TENANT_NAME,HOST,SECTION,KEY,VALUE
"global.ini","DEFAULT","","","persistence","log_mode","normal"
"global.ini","HOST","","node2","persistence","log_mode","normal"
2 rows selected (overall time 60.982 msec; server time 20.420 msec)
The output also shows that this parameter needs to be set in the [persistence] section. When you change the log_mode from overwrite to normal, it is recommended to create a full data backup to ensure that the database can be recovered.
6.1.15. Discovering primary database
There are several ways to identify the primary node, for instance:
- pcs status | grep Promoted
- hdbnsutil -sr_stateConfiguration
- python systemReplicationStatus.py
Option 1 - The following example of the systemReplicationStatus.py script with a filter returns the primary database location on all nodes:
az1n1:rh2adm> /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py --sapcontrol=1 | egrep -e "3${TINSTANCE}01/HOST|PRIMARY_MASTERS" | head -1 | awk -F"=" '{ print $2 }'
Output:
az2n1
Option 2 - The following example displays the system replication state in a similar way for all nodes:
az1n1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep "site.*Mode"
Output:
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
6.1.16. Taking over primary node
Refer to the Checking the replication status section for checks on the primary and the secondary nodes. Also:
- Put the cluster into maintenance-mode.
- Initiate the takeover on the secondary node.
Example for enabling maintenance-mode for the cluster:
[root@az1n1]# pcs property set maintenance-mode=true
On the secondary that is to become the new primary, run as the <sid>adm user:
az1n1:rh2adm> hdbnsutil -sr_takeover
This secondary becomes the primary; other active secondary databases are re-registered to the new primary, and the old primary needs to be manually re-registered as secondary.
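After the takeover, verify on the new primary that the takeover succeeded, reusing the check from the Discovering primary database section; the expected output matches the primary example shown earlier:
az1n1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
mode: primary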
6.1.17. Re-registering former primary as secondary
You must ensure that the cluster is stopped or put into maintenance-mode. Example:
az2n1:rh2adm> hdbnsutil -sr_register --remoteHost=az3n1 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC2 --remoteName=DC3 --operationMode=logreplay --force_full_replica --online
In our examples, we are using full replication. Your SAP HANA system administrator should know when full replication is required.
6.1.18. Recovering from failover
Refer to Checking the SAP HANA system replication status and Discovering the primary database. It is important that the information is consistent. If a node is not part of the systemReplicationStatus.py output and has a different system replication state, check with your database administrator whether this node needs to be re-registered.
One way of solving this is to re-register this site as a new secondary.
Sometimes a secondary instance still does not come up. In that case, unregister this site before re-registering it. Example of unregistering the secondary DC1:
az1n1:rh2adm> hdbnsutil -sr_unregister --name=DC1
Example of re-registering DC1:
az1n1:rh2adm> hdbnsutil -sr_register --name=DC1 --remoteHost=node2 --remoteInstance=02 --replicationMode=sync --operationMode=logreplay --online
You need to start the database and check that it is running. Finally, check the replication status.
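A minimal sketch combining the commands shown earlier for this final verification:
az1n1:rh2adm> HDB start
az1n1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList # all processes should be GREEN
az1n1:rh2adm> cdpy
az1n1:rh2adm> python systemReplicationStatus.py; echo $? # 15 is expected when replication is active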
6.2. Pacemaker commands
6.2.1. Starting and stopping the cluster
To start the cluster on all nodes, execute the following command:
# pcs cluster start --all
After a reboot, the cluster starts automatically only if the service is enabled. The following command shows whether the cluster has started and whether the daemons are enabled to start automatically.
# pcs cluster status
The cluster auto-start can be enabled with:
# pcs cluster enable --all
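If, in a production environment, you prefer manual intervention after a reboot (see the note on fencing in the Checking cluster status section), the auto-start can be disabled again:
# pcs cluster disable --all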
Other options are:
- Stop the cluster.
- Put a node into standby.
- Put the cluster into maintenance-mode.
For more details, check the pcs cluster help:
# pcs cluster stop --all
# pcs cluster help
6.2.2. Putting the cluster into maintenance-mode
If you want to make changes and avoid interference from the Pacemaker cluster, put the cluster into maintenance-mode to "freeze" it. Alternatively, you can put only the SAPHana resource into maintenance-mode:
# pcs property set maintenance-mode=true
An easy way to verify maintenance-mode is to check whether the resources are unmanaged. While the cluster is in maintenance-mode, it does not update resource status changes, so refresh the cluster resources to detect the current resource state:
# pcs resource refresh
This indicates whether anything is not yet correct and whether the cluster will take remediation action as soon as it is taken out of maintenance-mode.
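To verify that maintenance-mode is active, a minimal check is to look for unmanaged resources in the status output:
# pcs status | grep -i unmanaged # resources are shown as "(unmanaged)" while maintenance-mode is active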
Run the following command to remove maintenance-mode:
# pcs property set maintenance-mode=false
Now the cluster continues to work. If something is configured incorrectly, it reacts now.
6.2.3. Checking cluster status
Following are several ways to check the cluster status:
Check if the cluster is running:
# pcs cluster status
Check the cluster and all resources:
# pcs status
Check the cluster, all resources and all node attributes:
# pcs status --full
Check the resources only:
# pcs resource status --full
Check Stonith history:
# pcs stonith history
Check location constraints:
# pcs constraint location
Fencing must be configured and tested. In order to obtain a solution that is as automated as possible, the cluster must be permanently enabled, so that it starts automatically after a reboot. In a production environment, disabling the restart allows manual intervention, for instance after a crash. You must also check the daemon status.
Example:
# pcs status --full
Cluster name: cluster1
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-06-22 17:56:01 +02:00)
Cluster Summary:
* Stack: corosync
* Current DC: az2n1 (2) (version 2.1.5-7.el9-a3f44794f94) - partition with quorum
* Last updated: Thu Jun 22 17:56:01 2023
* Last change: Thu Jun 22 17:53:34 2023 by root via crm_attribute on az1n1
* 2 nodes configured
* 6 resource instances configured
Node List:
* Node az1n1 (1): online, feature set 3.16.2
* Node az2n1 (2): online, feature set 3.16.2
Full List of Resources:
* h7fence (stonith:fence_rhevm): Started az2n1
* Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]:
* SAPHanaTopology_RH2_02 (ocf:heartbeat:SAPHanaTopology): Started az1n1
* SAPHanaTopology_RH2_02 (ocf:heartbeat:SAPHanaTopology): Started az2n1
* Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable):
* SAPHana_RH2_02 (ocf:heartbeat:SAPHana): Promoted az1n1
* SAPHana_RH2_02 (ocf:heartbeat:SAPHana): Unpromoted az2n1
* vip_RH2_02_MASTER (ocf:heartbeat:IPaddr2): Started az1n1
Node Attributes:
* Node: az1n1 (1):
* hana_rh2_clone_state : PROMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : az2n1
* hana_rh2_roles : 4:P:master1:master:worker:master
* hana_rh2_site : DC1
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : PRIM
* hana_rh2_version : 2.00.059.02
* hana_rh2_vhost : az1n1
* lpa_rh2_lpt : 1687449214
* master-SAPHana_RH2_02 : 150
* Node: az2n1 (2):
* hana_rh2_clone_state : DEMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : az1n1
* hana_rh2_roles : 4:S:master1:master:worker:master
* hana_rh2_site : DC2
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : SOK
* hana_rh2_version : 2.00.059.02
* hana_rh2_vhost : az2n1
* lpa_rh2_lpt : 30
* master-SAPHana_RH2_02 : 100
Migration Summary:
Tickets:
PCSD Status:
az1n1: Online
az2n1: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
6.2.4. Checking resource states
Use pcs resource to check the status of all resources. This prints the list and the current status of the resources.
Example:
# pcs resource
* rsc_ip_MASTER1 (ocf:heartbeat:IPaddr2): Started az3n1
* rsc_ip_SLAVE1 (ocf:heartbeat:IPaddr2): Started az3n1
* Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10]:
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n1 (Monitoring)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n1 (Monitoring)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n2 (Monitoring)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n2 (Monitoring)
* Stopped: [ az3n1 ]
…
6.2.5. Checking resource config
The following displays the current resource configuration:
# pcs resource config
Resource: rsc_ip_MASTER1 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: rsc_ip_MASTER1-instance_attributes
ip=192.168.10.120
Operations:
monitor: rsc_ip_MASTER1-monitor-interval-10s
interval=10s timeout=20s
start: rsc_ip_MASTER1-start-interval-0s
interval=0s timeout=20s
stop: rsc_ip_MASTER1-stop-interval-0s
interval=0s timeout=20s
Resource: rsc_ip_SLAVE1 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: rsc_ip_SLAVE1-instance_attributes
ip=192.168.10.130
Meta Attributes: rsc_ip_SLAVE1-meta_attributes
is-enabled=false
Operations:
monitor: rsc_ip_SLAVE1-monitor-interval-10s
interval=10s timeout=20s
start: rsc_ip_SLAVE1-start-interval-0s
interval=0s timeout=20s
stop: rsc_ip_SLAVE1-stop-interval-0s
interval=0s timeout=20s
Clone: rsc_SAPHanaTopology_RH1_10-clone
Meta Attributes: rsc_SAPHanaTopology_RH1_10-clone-meta_attributes
clone-max=4
clone-node-max=1
interleave=true
Resource: rsc_SAPHanaTopology_RH1_10 (class=ocf provider=heartbeat type=SAPHanaTopology)
Attributes: rsc_SAPHanaTopology_RH1_10-instance_attributes
InstanceNumber=10
SID=RH1
Operations:
methods: rsc_SAPHanaTopology_RH1_10-methods-interval-0s
interval=0s timeout=5
monitor: rsc_SAPHanaTopology_RH1_10-monitor-interval-10
interval=10 timeout=600
reload: rsc_SAPHanaTopology_RH1_10-reload-interval-0s
interval=0s timeout=5
start: rsc_SAPHanaTopology_RH1_10-start-interval-0s
interval=0s timeout=600
stop: rsc_SAPHanaTopology_RH1_10-stop-interval-0s
interval=0s timeout=300
Clone: rsc_SAPHanaController_RH1_10-clone
Meta Attributes: rsc_SAPHanaController_RH1_10-clone-meta_attributes
clone-max=4
clone-node-max=1
interleave=true
promotable=true
Resource: rsc_SAPHanaController_RH1_10 (class=ocf provider=heartbeat type=SAPHanaController)
Attributes: rsc_SAPHanaController_RH1_10-instance_attributes
AUTOMATED_REGISTER=true
DUPLICATE_PRIMARY_TIMEOUT=7200
InstanceNumber=10
PREFER_SITE_TAKEOVER=true
SID=RH1
Meta Attributes: rsc_SAPHanaController_RH1_10-meta_attributes
priority=100
Operations:
demote: rsc_SAPHanaController_RH1_10-demote-interval-0s
interval=0s timeout=320
methods: rsc_SAPHanaController_RH1_10-methods-interval-0s
interval=0s timeout=5
monitor: rsc_SAPHanaController_RH1_10-monitor-interval-59
interval=59 timeout=700 role=Promoted
monitor: rsc_SAPHanaController_RH1_10-monitor-interval-61
interval=61 timeout=700 role=Unpromoted
promote: rsc_SAPHanaController_RH1_10-promote-interval-0s
interval=0s timeout=900
reload: rsc_SAPHanaController_RH1_10-reload-interval-0s
interval=0s timeout=5
start: rsc_SAPHanaController_RH1_10-start-interval-0s
interval=0s timeout=3600
stop: rsc_SAPHanaController_RH1_10-stop-interval-0s
interval=0s timeout=3600
This lists all the parameters which are used to configure the installed and configured resource agent.
6.2.6. SAPHana resource option AUTOMATED_REGISTER=true
If this option is used in the SAPHana resource, Pacemaker automatically re-registers the secondary database.
It is recommended to use this option for the first tests. If you use AUTOMATED_REGISTER=false, the administrator needs to re-register the secondary node manually.
6.2.7. Handling resources
There are several options for managing resources. For more information, check out the help available:
# pcs resource help
List the used resource agents:
# pcs resource config | grep "type=" | awk -F"type=" '{ print $2 }' | sed -e "s/)//g"
Example output:
IPaddr2
SAPHanaTopology
SAPHanaController
Display the description and configuration parameters of a specific resource agent:
# pcs resource describe <resource agent>
Example (without output):
# pcs resource describe IPaddr2
Example of resource agent IPaddr2 (with output):
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')
ocf:heartbeat:IPaddr2 - Manages virtual IPv4 and IPv6 addresses (Linux specific version)
This Linux-specific resource manages IP alias IP addresses. It can add an IP alias, or remove one. In
addition, it can implement Cluster Alias IP functionality if invoked as a clone resource. If used as a
clone, "shared address with a trivial, stateless (autonomous) load-balancing/mutual exclusion on
ingress" mode gets applied (as opposed to "assume resource uniqueness" mode otherwise). For that, Linux
firewall (kernel and userspace) is assumed, and since recent distributions are ambivalent in plain
"iptables" command to particular back-end resolution, "iptables-legacy" (when present) gets prioritized
so as to avoid incompatibilities (note that respective ipt_CLUSTERIP firewall extension in use here is,
at the same time, marked deprecated, yet said "legacy" layer can make it workable, literally, to this
day) with "netfilter" one (as in "iptables-nft"). In that case, you should explicitly set clone-node-max
>= 2, and/or clone-max < number of nodes. In case of node failure, clone instances need to be re-
allocated on surviving nodes. This would not be possible if there is already an instance on those nodes,
and clone-node-max=1 (which is the default). When the specified IP address gets assigned to a
respective interface, the resource agent sends unsolicited ARP (Address Resolution Protocol, IPv4) or NA
(Neighbor Advertisement, IPv6) packets to inform neighboring machines about the change. This
functionality is controlled for both IPv4 and IPv6 by shared 'arp_*' parameters.
Resource options:
ip (required) (unique): The IPv4 (dotted quad notation) or IPv6 address (colon hexadecimal notation)
example IPv4 "192.168.1.1". example IPv6 "2001:db8:DC28:0:0:FC57:D4C8:1FFF".
nic: The base network interface on which the IP address will be brought online. If left empty, the
script will try and determine this from the routing table. Do NOT specify an alias interface in
the form eth0:1 or anything here; rather, specify the base interface only. If you want a label,
see the iflabel parameter. Prerequisite: There must be at least one static IP address, which is
not managed by the cluster, assigned to the network interface. If you can not assign any static IP
address on the interface, modify this kernel parameter: sysctl -w
net.ipv4.conf.all.promote_secondaries=1 # (or per device)
cidr_netmask: The netmask for the interface in CIDR format (e.g., 24 and not 255.255.255.0) If
unspecified, the script will also try to determine this from the routing table.
broadcast: Broadcast address associated with the IP. It is possible to use the special symbols '+' and
'-' instead of the broadcast address. In this case, the broadcast address is derived by
setting/resetting the host bits of the interface prefix.
iflabel: You can specify an additional label for your IP address here. This label is appended to your
interface name. The kernel allows alphanumeric labels up to a maximum length of 15 characters
including the interface name and colon (e.g. eth0:foobar1234) A label can be specified in nic
parameter but it is deprecated. If a label is specified in nic name, this parameter has no effect.
lvs_support: Enable support for LVS Direct Routing configurations. In case a IP address is stopped,
only move it to the loopback device to allow the local node to continue to service requests, but
no longer advertise it on the network. Notes for IPv6: It is not necessary to enable this option
on IPv6. Instead, enable 'lvs_ipv6_addrlabel' option for LVS-DR usage on IPv6.
lvs_ipv6_addrlabel: Enable adding IPv6 address label so IPv6 traffic originating from the address's
interface does not use this address as the source. This is necessary for LVS-DR health checks to
realservers to work. Without it, the most recently added IPv6 address (probably the address added
by IPaddr2) will be used as the source address for IPv6 traffic from that interface and since that
address exists on loopback on the realservers, the realserver response to pings/connections will
never leave its loopback. See RFC3484 for the detail of the source address selection. See also
'lvs_ipv6_addrlabel_value' parameter.
lvs_ipv6_addrlabel_value: Specify IPv6 address label value used when 'lvs_ipv6_addrlabel' is enabled.
The value should be an unused label in the policy table which is shown by 'ip addrlabel list'
command. You would rarely need to change this parameter.
mac: Set the interface MAC address explicitly. Currently only used in case of the Cluster IP Alias.
Leave empty to chose automatically.
clusterip_hash: Specify the hashing algorithm used for the Cluster IP functionality.
unique_clone_address: If true, add the clone ID to the supplied value of IP to create a unique address
to manage
arp_interval: Specify the interval between unsolicited ARP (IPv4) or NA (IPv6) packets in
milliseconds. This parameter is deprecated and used for the backward compatibility only. It is
effective only for the send_arp binary which is built with libnet, and send_ua for IPv6. It has no
effect for other arp_sender.
arp_count: Number of unsolicited ARP (IPv4) or NA (IPv6) packets to send at resource initialization.
arp_count_refresh: For IPv4, number of unsolicited ARP packets to send during resource monitoring.
Doing so helps mitigate issues of stuck ARP caches resulting from split-brain situations.
arp_bg: Whether or not to send the ARP (IPv4) or NA (IPv6) packets in the background. The default is
true for IPv4 and false for IPv6.
arp_sender: For IPv4, the program to send ARP packets with on start. Available options are: -
send_arp: default - ipoibarping: default for infiniband interfaces if ipoibarping is available -
iputils_arping: use arping in iputils package - libnet_arping: use another variant of arping
based on libnet
send_arp_opts: For IPv4, extra options to pass to the arp_sender program. Available options are vary
depending on which arp_sender is used. A typical use case is specifying '-A' for iputils_arping
to use ARP REPLY instead of ARP REQUEST as Gratuitous ARPs.
flush_routes: Flush the routing table on stop. This is for applications which use the cluster IP
address and which run on the same physical host that the IP address lives on. The Linux kernel may
force that application to take a shortcut to the local loopback interface, instead of the
interface the address is really bound to. Under those circumstances, an application may, somewhat
unexpectedly, continue to use connections for some time even after the IP address is deconfigured.
Set this parameter in order to immediately disable said shortcut when the IP address goes away.
run_arping: For IPv4, whether or not to run arping for collision detection check.
nodad: For IPv6, do not perform Duplicate Address Detection when adding the address.
noprefixroute: Use noprefixroute flag (see 'man ip-address').
preferred_lft: For IPv6, set the preferred lifetime of the IP address. This can be used to ensure that
the created IP address will not be used as a source address for routing. Expects a value as
specified in section 5.5.4 of RFC 4862.
network_namespace: Specifies the network namespace to operate within. The namespace must already
exist, and the interface to be used must be within the namespace.
Default operations:
start:
interval=0s
timeout=20s
stop:
interval=0s
timeout=20s
monitor:
interval=10s
timeout=20s
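As an illustration of these parameters, a minimal sketch of creating an IPaddr2 resource; the resource name, IP address, and netmask are examples only:
# pcs resource create rsc_ip_example ocf:heartbeat:IPaddr2 ip=192.168.1.10 cidr_netmask=24 op monitor interval=10s timeout=20s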
If the cluster stops, all resources stop as well; if the cluster is put into maintenance-mode, all resources remain in their current state but are no longer monitored or managed.
6.2.8. Handling cluster property for maintenance-mode
Listing all defined properties:
[root@az1n1]# pcs property
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=cluster1
dc-version=2.1.7-5.2.el9_4-0f7f88312
have-watchdog=false
last-lrm-refresh=1747914571
maintenance-mode=true
stonith-enabled=false
stonith-timeout=900
To reconfigure the database, the cluster must be instructed to ignore any changes until the configuration is complete. You can put the cluster into maintenance-mode using:
# pcs property set maintenance-mode=true
Check the maintenance-mode; all resources should now be flagged with (maintenance):
# pcs resource
* rsc_ip_MASTER1 (ocf:heartbeat:IPaddr2): Started az1n3 (maintenance)
* rsc_ip_SLAVE1 (ocf:heartbeat:IPaddr2): Started az1n3 (maintenance)
* Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10] (maintenance):
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n1 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n1 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n2 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n2 (maintenance)
* Stopped: [ az1n3 ]
* Clone Set: rsc_SAPHanaFilesystem_RH1_10-clone [rsc_SAPHanaFilesystem_RH1_10] (maintenance):
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az2n1 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az1n1 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az1n2 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az2n2 (maintenance)
* Stopped: [ az1n3 ]
* Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az2n1 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az1n2 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az2n2 (maintenance)
* Stopped: [ az1n1 az1n3 ]
Verify in the cluster status that resource management is disabled and all resources are flagged with (maintenance):
[root@az1n1]# pcs status
Cluster name: cluster1
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: az2n1 (version 2.1.7-5.2.el9_4-0f7f88312) - partition with quorum
* Last updated: Fri May 30 18:28:48 2025 on az2n1
* Last change: Fri May 30 18:20:26 2025 by root via root on az2n1
* 5 nodes configured
* 16 resource instances configured
*** Resource management is DISABLED ***
The cluster will not attempt to start, stop or recover services
Node List:
* Online: [ az1n1 az1n2 az3n1 az2n1 az2n2 ]
Full List of Resources:
* R9_fence_out (stonith:fence_rhevm): Started az1n1 (maintenance)
* rsc_ip_MASTER1 (ocf:heartbeat:IPaddr2): Started az3n1 (maintenance)
* rsc_ip_SLAVE1 (ocf:heartbeat:IPaddr2): Started az3n1 (maintenance)
* Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10] (maintenance):
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n1 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n1 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n2 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n2 (maintenance)
* Stopped: [ az3n1 ]
* Clone Set: rsc_SAPHanaFilesystem_RH1_10-clone [rsc_SAPHanaFilesystem_RH1_10] (maintenance):
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az2n1 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az1n1 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az1n2 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az2n2 (maintenance)
* Stopped: [ az3n1 ]
* Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az2n1 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az1n2 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az2n2 (maintenance)
* Stopped: [ az1n1 az3n1 ]
Failed Resource Actions:
* rsc_SAPHanaController_RH1_10 start on az1n1 returned 'error' at Fri May 30 17:48:27 2025 after 19.098s
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
The resources switch back to managed mode when you unset maintenance-mode:
# pcs property set maintenance-mode=false
6.2.9. Failing over the SAPHana resource using Move
A simple way to fail over the SAP HANA database is to use the pcs resource move command. Use the clone resource name and move the resource as shown below:
# pcs resource move <SAPHana-clone-resource>
In this example, the clone resource is SAPHana_RH2_02-clone:
[root@az1n1]# pcs resource
* rsc_ip_MASTER1 (ocf:heartbeat:IPaddr2): Started az3n1 (maintenance)
* rsc_ip_SLAVE1 (ocf:heartbeat:IPaddr2): Started az3n1 (maintenance)
* Clone Set: rsc_SAPHanaTopology_RH1_10-clone [rsc_SAPHanaTopology_RH1_10] (maintenance):
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n1 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n1 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az1n2 (maintenance)
* rsc_SAPHanaTopology_RH1_10 (ocf:heartbeat:SAPHanaTopology): Started az2n2 (maintenance)
* Stopped: [ az3n1 ]
* Clone Set: rsc_SAPHanaFilesystem_RH1_10-clone [rsc_SAPHanaFilesystem_RH1_10] (maintenance):
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az2n1 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az1n1 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az1n2 (maintenance)
* rsc_SAPHanaFilesystem_RH1_10 (ocf:heartbeat:SAPHanaFilesystem): Started az2n2 (maintenance)
* Stopped: [ az3n1 ]
* Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az2n1 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az1n2 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted az2n2 (maintenance)
* Stopped: [ az1n1 az3n1 ]
Move the resource:
# pcs resource move SAPHana_RH2_02-clone
Location constraint to move resource 'SAPHana_RH2_02-clone' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'SAPHana_RH2_02-clone' has been removed
Waiting for the cluster to apply configuration changes...
resource 'SAPHana_RH2_02-clone' is promoted on node 'az2n1'; unpromoted on node 'az1n1'
Check if there are remaining constraints:
# pcs constraint location
Clear the resource to remove the location constraints created during the failover. Example:
[root@az1n1]# pcs resource clear SAPHana_RH2_02-clone
Check if there are any remaining warnings or entries in the "Migration Summary":
# pcs status --full
Check the stonith history:
# pcs stonith history
If desired, clear the stonith history:
# pcs stonith history cleanup
If you are using a Pacemaker version earlier than 2.1.5, refer to the article Is there a way to manage constraints when running pcs resource move? and check the remaining constraints.
6.2.10. Monitoring failover and sync state
All Pacemaker activities are logged in the /var/log/messages file on the cluster nodes. Since there are many other messages, it is sometimes difficult to spot the messages related to the SAP resource agent. You can configure a command alias that filters out only those messages.
Example alias tmsl:
# alias tmsl='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SAPSYSTEMNAME}_HDB${TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT"'
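The alias relies on the SAPSYSTEMNAME and TINSTANCE environment variables, which are set in the <sid>adm user's environment (see Aliases for monitoring). You can verify they are set in the shell where you define the alias; in this example environment, the expected values would be:
az1n1:rh2adm> echo $SAPSYSTEMNAME $TINSTANCE
RH2 02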
Example output of tmsl:
[root@az1n1]# tmsl
Jun 22 13:59:54 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 13:59:55 az1n1 SAPHana(SAPHana_RH2_02)[907482]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:04:06 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:04:06 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:04:06 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 14:04:09 az1n1 SAPHana(SAPHana_RH2_02)[914625]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:08:21 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:08:21 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:08:21 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:08:23 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:08:23 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:08:23 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 14:08:24 az1n1 SAPHana(SAPHana_RH2_02)[922136]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:12:35 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:12:35 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:12:36 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:12:38 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: secondary with sync status SOK ==> possible takeover node
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: saphana_monitor_secondary: scoring_crm_master(4:S:master1:master:worker:master,SOK)
Jun 22 14:12:39 az1n1 SAPHana(SAPHana_RH2_02)[929408]: INFO: DEC: scoring_crm_master: sync(SOK) is matching syncPattern (SOK)
Jun 22 14:14:01 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az2n1]: PROMOTED -> DEMOTED
Jun 22 14:14:02 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az2n1]: DEMOTED -> UNDEFINED
Jun 22 14:14:19 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az1n1]: DEMOTED -> PROMOTED
Jun 22 14:14:21 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:14:21 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: hana_rh2_site_srHook_DC1 is empty or SWAIT. Take polling attribute: hana_rh2_sync_state=SOK
Jun 22 14:14:21 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: Finally get_SRHOOK()=SOK
Jun 22 14:15:14 az1n1 SAPHana(SAPHana_RH2_02)[932762]: INFO: DEC: hana_rh2_site_srHook_DC1=SWAIT
Jun 22 14:15:22 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az1n1]: SOK -> PRIM
Jun 22 14:15:23 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az2n1]: PRIM -> SOK
Jun 22 14:15:23 az1n1 SAPHana(SAPHana_RH2_02)[934810]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:15:25 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_clone_state[az2n1]: UNDEFINED -> DEMOTED
Jun 22 14:15:32 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az2n1]: SOK -> SFAIL
Jun 22 14:19:36 az1n1 pacemaker-attrd[10150]: notice: Setting hana_rh2_sync_state[az2n1]: SFAIL -> SOK
Jun 22 14:19:36 az1n1 SAPHana(SAPHana_RH2_02)[942693]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:23:49 az1n1 SAPHana(SAPHana_RH2_02)[950623]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:28:02 az1n1 SAPHana(SAPHana_RH2_02)[958633]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:32:15 az1n1 SAPHana(SAPHana_RH2_02)[966683]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:36:27 az1n1 SAPHana(SAPHana_RH2_02)[974736]: INFO: ACT site=DC1, setting SOK for secondary (1)
Jun 22 14:40:40 az1n1 SAPHana(SAPHana_RH2_02)[982934]: INFO: ACT site=DC1, setting SOK for secondary (1)
The filter makes it easier to understand what status changes are happening. If details are missing, you can open the whole message file to read all the information.
After a failover, you can clear the resource and check that there are no remaining location constraints.
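For example, using the clone resource name from this chapter:
# pcs resource clear SAPHana_RH2_02-clone
# pcs constraint location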
6.2.11. Checking cluster consistency
During the installation, resources are sometimes started before the configuration is fully completed. This can lead to stale entries in the Cluster Information Base (CIB), which can result in incorrect behavior. Such entries can easily be checked and manually corrected after the configuration has been completed.
If you start the SAPHana resources, missing entries are recreated automatically. Wrong entries cannot be addressed with pcs commands and must be removed manually.
Check CIB entries:
# cibadmin --query
DC3 and SFAIL are entries that should not be present in the Cluster Information Base when the cluster members are DC1 and DC2 and the sync state between the nodes is reported as SOK.
Example to check for corresponding entries:
# cibadmin --query |grep '"DC3"'
# cibadmin --query |grep '"SFAIL"'
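Both checks can also be combined into a single call:
# cibadmin --query | egrep '"DC3"|"SFAIL"'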
These commands can be executed on any node in the cluster as the root user. Usually the output is empty. If there is still an error in the configuration, the output could look like this:
<nvpair id="SAPHanaSR-hana_rh1_glob_sec" name="hana_rh1_glob_sec" value="DC3"/>
These entries can be removed with the following command:
# cibadmin --delete --xml-text '<...>'
To remove the entries from the example above, enter the following. Note that the output contains double quotes, so the text must be enclosed in single quotes:
# cibadmin --delete --xml-text ' <nvpair id="SAPHanaSR-hana_rh1_glob_sec" name="hana_rh1_glob_sec" value="DC3"/>'
Verify the absence of the removed CIB entries. The returned output should be empty.
# cibadmin --query |grep 'DC3"'
6.2.12. Cleaning up cluster
During failover tests, there might be leftover constraints and other remnants from previous tests. The cluster must be cleared of these before starting the next test.
Check the cluster status for failure events:
# pcs status --full
If you see cluster warnings or entries in the "Migration Summary", clear and clean up the resources:
# pcs resource clear SAPHana_RH2_02-clone
# pcs resource cleanup SAPHana_RH2_02-clone
Output:
Cleaned up SAPHana_RH2_02:0 on az1n1
Cleaned up SAPHana_RH2_02:1 on az2n1
Check if there are unwanted location constraints, for example from a previous failover:
# pcs constraint location
Check the existing constraints in more detail:
# pcs constraint --full
Example of a location constraint after a resource move:
Node: hana08 (score:-INFINITY) (role:Started) (id:cli-ban-SAPHana_RH2_02-clone-on-hana08)
Clear this location constraint:
# pcs resource clear SAPHana_RH2_02-clone
Verify the constraint is gone from the constraints list. If it persists, explicitly delete it using its constraint id:
# pcs constraint delete cli-ban-SAPHana_RH2_02-clone-on-hana08
If you run several tests with fencing you might also clear the stonith history:
# pcs stonith history cleanup
All pcs commands are executed as the root user. For more details, check Discovering leftovers.
6.2.13. Other cluster commands
Various cluster command examples:
# pcs status --full
# crm_mon -1Arf # Provides an overview
# pcs resource # Lists all resources and shows if they are running
# pcs constraint --full # Lists all constraints with their ids, which are needed to remove a constraint
# pcs cluster start --all # This will start the cluster on all nodes
# pcs cluster stop --all # This will stop the cluster on all nodes
# pcs node attribute # Lists node attributes
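Depending on the pcs version, the following commands can also be helpful; verify their availability on your release:
# pcs quorum status # Shows quorum and vote information
# pcs resource failcount show # Shows resource failure counts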
6.2.14. Alternative maintenance procedure
Instead of putting the whole cluster into maintenance-mode, you can also put a single resource into maintenance mode.
# pcs resource meta rsc_SAPHanaController_RH1_10-clone maintenance=true
# pcs resource # will show the maintenance mode
…
* Clone Set: rsc_SAPHanaController_RH1_10-clone [rsc_SAPHanaController_RH1_10] (promotable, maintenance):
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted ndc1hana02 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted ndc2hana01 (maintenance)
* rsc_SAPHanaController_RH1_10 (ocf:heartbeat:SAPHanaController): Unpromoted ndc2hana02 (maintenance)
…
To leave maintenance mode, enter:
# pcs resource meta rsc_SAPHanaController_RH1_10-clone maintenance=false
It is also important to refresh the resources after leaving maintenance mode:
# pcs resource refresh
# crm_mon -1Arf # Provides an overview
# pcs resource
6.3. RHEL and general commands
6.3.1. Discovering current status
You have to follow several steps to determine the current status of the environment. Refer to Monitoring the environment. In addition, we recommend the following:
- Check /var/log/messages; use the Aliases for monitoring for easier log reviews (a combined quick check is shown below).
- Sometimes a cluster must be cleaned up from previous activity to continue proper operation. See Discovering leftovers and clear them if necessary.
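A minimal sequence for a first status check, based on commands used throughout this chapter:
# pcs status --full # Cluster, node, and resource status including the migration summary
# pcs constraint location # Leftover location constraints from previous moves
# tail -100 /var/log/messages | grep -v systemd # Recent log messages without systemd noise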
6.3.2. yum info
# yum info resource-agents-sap-hana
Last metadata expiration check: 2:47:28 ago on Tue 06 Jun 2023 03:13:57 AM CEST.
Installed Packages
Name : resource-agents-sap-hana
Epoch : 1
Version : 0.162.1
Release : 2.el9_2
Architecture : noarch
Size : 174 k
Source : resource-agents-sap-hana-0.162.1-2.el9_2.src.rpm
Repository : @System
Summary : SAP HANA cluster resource agents
URL : https://github.com/SUSE/SAPHanaSR
License : GPLv2+
Description : The SAP HANA resource agents interface with Pacemaker to allow
: SAP instances to be managed in a cluster environment.
6.3.3. RPM display version
# rpm -q resource-agents-sap-hana
resource-agents-sap-hana-0.162.1-2.el9_2.noarch
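The resource agent version should be identical on all cluster nodes. A sketch to compare them, using the node names from this example environment:
# for node in az1n1 az1n2 az2n1 az2n2 az3n1; do ssh $node rpm -q resource-agents-sap-hana; done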
6.3.4. Aliases for monitoring
You can add these aliases to your shell profiles. In the example, the root aliases depend on the <sid>adm aliases, which must therefore also be defined.
root (add to ~/.bashrc):
export ListInstances=$(/usr/sap/hostctrl/exe/saphostctrl -function ListInstances| head -1 )
export sid=$(echo "$ListInstances" |cut -d " " -f 5| tr [A-Z] [a-z])
export SID=$(echo $sid | tr [a-z] [A-Z])
export Instance=$(echo "$ListInstances" |cut -d " " -f 7 )
alias crmm='watch -n 1 crm_mon -1Arf'
alias crmv='watch -n 1 /usr/local/bin/crmmv'
alias cglo='su - ${sid}adm -c cglo'
alias cdh='cd /usr/lib/ocf/resource.d/heartbeat'
alias gtr='su - ${sid}adm -c gtr'
alias hdb='su - ${sid}adm -c hdb'
alias hdbi='su - ${sid}adm -c hdbi'
alias hgrep='history | grep $1'
alias hri='su - ${sid}adm -c hri'
alias hris='su - ${sid}adm -c hris'
alias killnode="echo 'b' > /proc/sysrq-trigger"
alias lhc='su - ${sid}adm -c lhc'
alias pit='ssh pitunnel'
alias python='/usr/sap/${SID}/HDB${Instance}/exe/Python/bin/python'
alias srstate='su - ${sid}adm -c srstate'
alias shr='watch -n 5 "SAPHanaSR-monitor --sid=${SID}"'
alias sgsi='su - ${sid}adm -c sgsi'
alias srm='su - ${sid}adm -c srm'
alias srs='su - ${sid}adm -c srs'
alias sapstart='su - ${sid}adm -c sapstart'
alias sapstop='su - ${sid}adm -c sapstop'
alias tma='tmux attach -t `tmux ls | grep -v atta| head -1 |cut -d " " -f 1`'
alias tm='tail -100f /var/log/messages |grep -v systemd'
alias tms='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SID}_HDB${Instance}|sr_register|WAITING4LPA|EXCLUDE as possible takeover node|SAPHanaSR|failed|${HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED"'
alias tmss='tail -1000f /var/log/messages | grep -v systemd| egrep -s "secondary with sync status|Setting master-rsc_SAPHana_${SID}_HDB${Instance}|sr_register|WAITING4LPA|EXCLUDE as possible takeover node|SAPHanaSR|failed|${HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED"'
alias tmm='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SID}_HDB${Instance}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon"| grep -v systemd'
alias tmsl='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SID}_HDB${Instance}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon"'
alias vih='vim /usr/lib/ocf/resource.d/heartbeat/SAPHanaStart'
alias vglo='su - ${sid}adm -c vglo'
<sid>adm (add to ~/.customer.sh):
alias tm='tail -100f /var/log/messages |grep -v systemd'
alias tms='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SAPSYSTEMNAME}_HDB${TINSTANCE}|sr_register|WAITING4LPA|EXCLUDE as possible takeover node|SAPHanaSR|failed|${HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED"'
alias tmsl='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_${SAPSYSTEMNAME}_HDB${TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT"'
alias sapstart='sapcontrol -nr ${TINSTANCE} -function StartSystem HDB;hdbi'
alias sapstop='sapcontrol -nr ${TINSTANCE} -function StopSystem HDB;hdbi'
alias sgsi='watch sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList'
alias spl='watch sapcontrol -nr ${TINSTANCE} -function GetProcessList'
alias splh='watch "sapcontrol -nr ${TINSTANCE} -function GetProcessList| grep hdbdaemon"'
alias srm='watch "hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode"'
alias srs="watch -n 5 'python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status \$?'"
alias srstate='watch -n 10 hdbnsutil -sr_state'
alias hdb='watch -n 5 "sapcontrol -nr ${TINSTANCE} -function GetProcessList| egrep -s hdbdaemon\|hdbnameserver\|hdbindexserver "'
alias hdbi='watch -n 5 "sapcontrol -nr ${TINSTANCE} -function GetProcessList| egrep -s hdbdaemon\|hdbnameserver\|hdbindexserver;sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList "'
alias hgrep='history | grep $1'
alias vglo="vim /usr/sap/${SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini"
alias vgloh="vim /hana/shared/${SAPSYSTEMNAME}/HDB${TINSTANCE}/${HOSTNAME}/global.ini"
alias hri='hdbcons -e hdbindexserver "replication info"'
alias hris='hdbcons -e hdbindexserver "replication info" | egrep -e "SiteID|ReplicationStatus_"'
alias gtr='watch -n 10 /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/getTakeoverRecommendation.py --sapcontrol=1'
alias lhc='/usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/landscapeHostConfiguration.py;echo $?'