Chapter 6. Useful commands
The following are three sections of useful commands. In most cases, they should help to verify whether an operation or a configuration was successful. The examples are listed together with their responses. In some cases, the output has been adjusted for formatting reasons.
- All commands in this document that are executed by the <sid>adm user start with >.
- All commands that are executed by the root user start with #.
- To execute a command, omit the prefix > or #.
6.1. SAP HANA commands
SAP HANA commands are executed by the <sid>adm user. Example:
[root@clusternode1]# su - rh2adm
clusternode1:rh2adm> cdpy
clusternode1:rh2adm> pwd
/usr/sap/RH2/HDB02/exe/python_support
clusternode1:rh2adm> python systemReplicationStatus.py -h
systemReplicationStatus.py [-h|--help] [-a|--all] [-l|--localhost] [-m|--multiTaget] [-s|--site=<site name>] [-t|--printLandscapeTree] [--omitSecondaryActiveStatus] [--sapcontrol=1]
clusternode1:rh2adm> python landscapeHostConfiguration.py -h
landscapeHostConfiguration.py [-h|--help] [--localhost] [--sapcontrol=1]
clusternode1:rh2adm> hdbnsutil # run hdbnsutil without parameters to get help
6.1.1. SAP HANA installation using hdblcm
The installation of the third site is similar to the installation of the second site. The installation can be done with hdblcm as the root user. To ensure that nothing has been installed before, run hdbuninst to check whether SAP HANA is not already installed on this node.
Example output of the HANA uninstallation check:
[root@remotehost3]# cd /software/DATA_UNITS/HDB_SERVER_LINUX_X86_64
root@DC3/software/DATA_UNITS/HDB_SERVER_LINUX_X86_64# ./hdbuninst
Option 0 will remove an already existing HANA Installation

No SAP HANA Installation found is the expected answer
Example output of the HANA installation on DC3:
[root@remotehost3]# ./hdblcm
1 install
2 server
/hana/shared is default directory
Enter Local Hostname [remotehost3]: use the default name
additional hosts only during Scale-Out Installation y default is n
ENTER SAP HANA System ID: RH2
Enter Instance Number [02]:
Enter Local Host Worker Group [default]:
Select System Usage / Enter Index [4]:
Choose encryption
Enter Location of Data Volumes [/hana/data/RH2]:
Enter Location of Log Volumes [/hana/log/RH2]:
Restrict maximum memory allocation? [n]:
Enter Certificate Host Name
Enter System Administrator (rh2adm) Password: <Y0urPasswd>
Confirm System Administrator (rh2adm) Password: <Y0urPasswd>
Enter System Administrator Home Directory [/usr/sap/RH2/home]:
Enter System Administrator Login Shell [/bin/sh]:
Enter System Administrator User ID [1000]:
Enter System Database User (SYSTEM) Password: <Y0urPasswd>
Confirm System Database User (SYSTEM) Password: <Y0urPasswd>
Restart system after machine reboot? [n]:
Before the installation starts, a summary is listed:
SAP HANA Database System Installation
   Installation Parameters
      Remote Execution: ssh
      Database Isolation: low
      Install Execution Mode: standard
      Installation Path: /hana/shared
      Local Host Name: dc3host
      SAP HANA System ID: RH2
      Instance Number: 02
      Local Host Worker Group: default
      System Usage: custom
      Location of Data Volumes: /hana/data/RH2
      Location of Log Volumes: /hana/log/RH2
      SAP HANA Database secure store: ssfs
      Certificate Host Names: remotehost3 -> remotehost3
      System Administrator Home Directory: /usr/sap/RH2/home
      System Administrator Login Shell: /bin/sh
      System Administrator User ID: 1000
      ID of User Group (sapsys): 1010
   Software Components
      SAP HANA Database
         Install version 2.00.052.00.1599235305
         Location: /software/DATA_UNITS/HDB_SERVER_LINUX_X86_64/server
      SAP HANA Local Secure Store
         Do not install
      SAP HANA AFL (incl.PAL,BFL,OFL)
         Do not install
      SAP HANA EML AFL
         Do not install
      SAP HANA EPM-MDS
         Do not install
      SAP HANA Database Client
         Do not install
      SAP HANA Studio
         Do not install
      SAP HANA Smart Data Access
         Do not install
      SAP HANA XS Advanced Runtime
         Do not install
   Log File Locations
      Log directory: /var/tmp/hdb_RH2_hdblcm_install_2021-06-09_18.48.13
      Trace location: /var/tmp/hdblcm_2021-06-09_18.48.13_31307.trc

Do you want to continue? (y/n):
Enter y to start the installation.
6.1.2. Checking the inifile contents using hdbsql
clusternode1:rh2adm> hdbsql -i ${TINSTANCE} -u system -p Y0urP8ssw0rd

Welcome to the SAP HANA Database interactive terminal.

Type:  \h for help with commands
       \q to quit

hdbsql RH2=> select * from M_INIFILE_CONTENTS where section='system_replication'
FILE_NAME,LAYER_NAME,TENANT_NAME,HOST,SECTION,KEY,VALUE
"global.ini","DEFAULT","","","system_replication","actual_mode","primary"
"global.ini","DEFAULT","","","system_replication","mode","primary"
"global.ini","DEFAULT","","","system_replication","operation_mode","logreplay"
"global.ini","DEFAULT","","","system_replication","register_secondaries_on_takeover","true"
"global.ini","DEFAULT","","","system_replication","site_id","1"
"global.ini","DEFAULT","","","system_replication","site_name","DC2"
"global.ini","DEFAULT","","","system_replication","timetravel_logreplay_mode","auto"
"global.ini","DEFAULT","","","system_replication","alternative_sources",""
"global.ini","DEFAULT","","","system_replication","datashipping_logsize_threshold","5368709120"
"global.ini","DEFAULT","","","system_replication","datashipping_min_time_interval","600"
"global.ini","DEFAULT","","","system_replication","datashipping_parallel_channels","4"
"global.ini","DEFAULT","","","system_replication","datashipping_parallel_processing","true"
"global.ini","DEFAULT","","","system_replication","datashipping_snapshot_max_retention_time","300"
"global.ini","DEFAULT","","","system_replication","enable_data_compression","false"
"global.ini","DEFAULT","","","system_replication","enable_full_sync","false"
"global.ini","DEFAULT","","","system_replication","enable_log_compression","false"
"global.ini","DEFAULT","","","system_replication","enable_log_retention","auto"
"global.ini","DEFAULT","","","system_replication","full_replica_on_failed_delta_sync_check","false"
"global.ini","DEFAULT","","","system_replication","hint_based_routing_site_name",""
"global.ini","DEFAULT","","","system_replication","keep_old_style_alert","false"
"global.ini","DEFAULT","","","system_replication","logshipping_async_buffer_size","67108864"
"global.ini","DEFAULT","","","system_replication","logshipping_async_wait_on_buffer_full","true"
"global.ini","DEFAULT","","","system_replication","logshipping_max_retention_size","1048576"
"global.ini","DEFAULT","","","system_replication","logshipping_replay_logbuffer_cache_size","1073741824"
"global.ini","DEFAULT","","","system_replication","logshipping_replay_push_persistent_segment_count","5"
"global.ini","DEFAULT","","","system_replication","logshipping_snapshot_logsize_threshold","3221225472"
"global.ini","DEFAULT","","","system_replication","logshipping_snapshot_min_time_interval","900"
"global.ini","DEFAULT","","","system_replication","logshipping_timeout","30"
"global.ini","DEFAULT","","","system_replication","preload_column_tables","true"
"global.ini","DEFAULT","","","system_replication","propagate_log_retention","off"
"global.ini","DEFAULT","","","system_replication","reconnect_time_interval","30"
"global.ini","DEFAULT","","","system_replication","retries_before_register_to_alternative_source","20"
"global.ini","DEFAULT","","","system_replication","takeover_esserver_without_log_backup","false"
"global.ini","DEFAULT","","","system_replication","takeover_wait_until_esserver_restart","true"
"global.ini","DEFAULT","","","system_replication","timetravel_call_takeover_hooks","off"
"global.ini","DEFAULT","","","system_replication","timetravel_log_retention_policy","none"
"global.ini","DEFAULT","","","system_replication","timetravel_max_retention_time","0"
"global.ini","DEFAULT","","","system_replication","timetravel_snapshot_creation_interval","1440"
"indexserver.ini","DEFAULT","","","system_replication","logshipping_async_buffer_size","268435456"
"indexserver.ini","DEFAULT","","","system_replication","logshipping_replay_logbuffer_cache_size","4294967296"
"indexserver.ini","DEFAULT","","","system_replication","logshipping_replay_push_persistent_segment_count","20"
41 rows selected (overall time 1971.958 msec; server time 31.359 msec)
6.1.3. Checking the database
Check whether the database is running, and discover the current primary node.
List the database instances
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList

23.06.2023 12:08:17
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
node1, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
If the dispstatus is GREEN, the instance is running.
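If you want to script this check, a minimal sketch (assuming the same <sid>adm environment as in the examples above) is to filter the sapcontrol output for GREEN:

clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList | grep -q GREEN && echo "instance is running" || echo "instance is not running"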
List the database processes
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList

GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2023 09 04 14:34:01, 18:41:33, 3788067
hdbcompileserver, HDB Compileserver, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445299
hdbindexserver, HDB Indexserver-RH2, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445391
hdbnameserver, HDB Nameserver, GREEN, Running, 2023 09 04 22:35:34, 10:40:00, 445178
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445306
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2023 09 04 22:35:53, 10:39:41, 445955
hdbxsengine, HDB XSEngine-RH2, GREEN, Running, 2023 09 04 22:35:40, 10:39:54, 445394
Usually, all database processes are in status GREEN.
List the SAP HANA processes
clusternode1:rh2adm> HDB info
USER          PID     PPID  %CPU        VSZ        RSS COMMAND
rh2adm       1560     1559   0.0       6420       3136 watch -n 5 sapcontrol -nr 02 -functi
rh2adm       1316     1315   0.0       8884       5676 -sh
rh2adm       2549     1316   0.0       7516       4072  \_ /bin/sh /usr/sap/RH2/HDB02/HDB i
rh2adm       2579     2549   0.0      10144       3576      \_ ps fx -U rh2adm -o user:8,pi
rh2adm       2388        1   0.0     679536      55520 hdbrsutil --start --port 30203 --vo
rh2adm       1921        1   0.0     679196      55312 hdbrsutil --start --port 30201 --vo
rh2adm       1469        1   0.0       8852       3260 sapstart pf=/usr/sap/RH2/SYS/profile
rh2adm       1476     1469   0.7     438316      86288  \_ /usr/sap/RH2/HDB02/remotehost3/trace/
rh2adm       1501     1476  11.7    9690172    1574796      \_ hdbnameserver
rh2adm       1845     1476   0.8     410696     122988      \_ hdbcompileserver
rh2adm       1848     1476   1.0     659464     154072      \_ hdbpreprocessor
rh2adm       1899     1476  14.7    9848276    1765208      \_ hdbindexserver -port 30203
rh2adm       1902     1476   8.4    5023288    1052768      \_ hdbxsengine -port 30207
rh2adm       2265     1476   5.2    2340284     405016      \_ hdbwebdispatcher
rh2adm       1117        1   1.1     543532      30676 /usr/sap/RH2/HDB02/exe/sapstartsrv p
rh2adm       1029        1   0.0      20324      11572 /usr/lib/systemd/systemd --user
rh2adm       1030     1029   0.0      23256       3536  \_ (sd-pam)
Display the SAP HANA landscape configuration
clusternode1:rh2adm> /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/landscapeHostConfiguration.py;echo $?

| Host         | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host   | Host   | Worker  | Worker  |
|              | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config | Actual | Config  | Actual  |
|              |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles  | Roles  | Groups  | Groups  |
| ------------ | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------ | ------ | ------- | ------- |
| clusternode1 | yes    | ok     |          |        |         1 |         1 | default  | default  | master 1   | master     | worker      | master      | worker | worker | default | default |

overall host status: ok
4
Return codes:
- 0: fatal
- 1: error
- 2: warning
- 3: info
- 4: OK
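As an illustration only (not part of the SAP tooling), the return code can be evaluated directly in the shell, using the same script path as in the example above:

clusternode1:rh2adm> /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/landscapeHostConfiguration.py > /dev/null; rc=$?; if [ $rc -eq 4 ]; then echo "landscape status OK (rc=$rc)"; else echo "check landscape (rc=$rc)"; fi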
Discover the primary database
clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
Example of the check on a secondary:
clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
mode: syncmem
primary masters: clusternode1
Example of the check on the current primary:
clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "primary masters|^mode"
mode: primary

clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
Display the database version
Example using an SQL query:
hdbsql RH2=> select * from m_database
SYSTEM_ID,DATABASE_NAME,HOST,START_TIME,VERSION,USAGE
"RH2","RH2","node1","2023-06-22 15:33:05.235000000","2.00.059.02.1647435895","CUSTOM"
1 row selected (overall time 29.107 msec; server time 927 usec)
Example using systemOverview.py:
clusternode1:rh2adm> python ./systemOverview.py
| Section    | Name            | Status  | Value                                                |
| ---------- | --------------- | ------- | ---------------------------------------------------- |
| System     | Instance ID     |         | RH2                                                  |
| System     | Instance Number |         | 02                                                   |
| System     | Distributed     |         | No                                                   |
| System     | Version         |         | 2.00.059.02.1647435895 (fa/hana2sp05)                |
| System     | Platform        |         | Red Hat Enterprise Linux 9.2 Beta (Plow) 9.2 (Plow)  |
| Services   | All Started     | OK      | Yes                                                  |
| Services   | Min Start Time  |         | 2023-07-14 16:31:19.000                              |
| Services   | Max Start Time  |         | 2023-07-26 11:23:17.324                              |
| Memory     | Memory          | OK      | Physical 31.09 GB, Swap 10.00 GB, Used 26.38         |
| CPU        | CPU             | OK      | Available 4, Used 1.04                               |
| Disk       | Data            | OK      | Size 89.0 GB, Used 59.3 GB, Free 33 %                |
| Disk       | Log             | OK      | Size 89.0 GB, Used 59.3 GB, Free 33 %                |
| Disk       | Trace           | OK      | Size 89.0 GB, Used 59.3 GB, Free 33 %                |
| Statistics | Alerts          | WARNING | cannot check statistics w/o SQL connection           |
6.1.4. Starting and stopping SAP HANA
Option 1: the HDB command
clusternode1:rh2adm> HDB help
Usage: /usr/sap/RH2/HDB02/HDB { start|stop|reconf|restart|version|info|proc|admin|kill|kill-<sig>|term }
  kill or kill-9 should never be used in a productive environment!
Start the database
clusternode1:rh2adm> HDB start
Stop the database
clusternode1:rh2adm> HDB stop
Option 2 (recommended): using sapcontrol
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function StartSystem HDB

03.07.2023 14:08:30
StartSystem
OK
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function StopSystem HDB

03.07.2023 14:09:33
StopSystem
OK
Use GetProcessList to monitor the starting and stopping of the HANA services:
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList
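As a convenience, the call can be wrapped in watch so that the process list refreshes continuously while the services come up or shut down; the 5-second interval is an arbitrary choice:

clusternode1:rh2adm> watch -n 5 "sapcontrol -nr ${TINSTANCE} -function GetProcessList"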
6.1.5. Checking the SAP HANA System Replication status
There are several ways to check the SAP HANA System Replication status:
- clusternode1:rh2adm> python systemReplicationStatus.py on the primary node
- clusternode1:rh2adm> echo $? (return code of systemReplicationStatus)
- clusternode1:rh2adm> hdbnsutil -sr_state
- clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration
Example output of systemReplicationStatus.py running as a monitor:
clusternode1:rh2adm> watch -n 5 "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py;echo \$?"

Every 5.0s: python systemReplicationStatus.py;echo $?    hana08: Fri Jul 28 17:01:05 2023

|Database |Host   |Port  |Service Name |Volume ID |Site ID |Site Name |Secondary   |Secondary |Secondary |Secondary |Secondary     |Replication |Replication |Replication    |
|         |       |      |             |          |        |          |Host        |Port      |Site ID   |Site Name |Active Status |Mode        |Status      |Status Details |
|-------- |------ |----- |------------ |--------- |------- |--------- |----------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |
|SYSTEMDB |hana08 |30201 |nameserver   |        1 |      1 |DC2       |hana09      |    30201 |        3 |DC3       |YES           |SYNCMEM     |ACTIVE      |               |
|RH2      |hana08 |30207 |xsengine     |        2 |      1 |DC2       |hana09      |    30207 |        3 |DC3       |YES           |SYNCMEM     |ACTIVE      |               |
|RH2      |hana08 |30203 |indexserver  |        3 |      1 |DC2       |hana09      |    30203 |        3 |DC3       |YES           |SYNCMEM     |ACTIVE      |               |
|SYSTEMDB |hana08 |30201 |nameserver   |        1 |      1 |DC2       |remotehost3 |    30201 |        2 |DC1       |YES           |SYNCMEM     |ACTIVE      |               |
|RH2      |hana08 |30207 |xsengine     |        2 |      1 |DC2       |remotehost3 |    30207 |        2 |DC1       |YES           |SYNCMEM     |ACTIVE      |               |
|RH2      |hana08 |30203 |indexserver  |        3 |      1 |DC2       |remotehost3 |    30203 |        2 |DC1       |YES           |SYNCMEM     |ACTIVE      |               |

status system replication site "3": ACTIVE
status system replication site "2": ACTIVE
overall system replication status: ACTIVE

Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mode: PRIMARY
site id: 1
site name: DC2
15
The expected results for the return code are:
- 10: NoHSR
- 11: error
- 12: unknown
- 13: initializing
- 14: syncing
- 15: active
In most cases, the system replication check returns the return code 15. Another display option is to use -t (printLandscapeTree).
Example of the output on the current primary:
clusternode1:rh2adm> python systemReplicationStatus.py -t
HANA System Replication landscape:
  DC1 ( primary )
      | --- DC3 ( syncmem )
      | --- DC2 ( syncmem )
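If a script needs to wait until replication reports ACTIVE again (for example after a re-registration), a minimal sketch based on the return codes above is the following loop; the 10-second sleep is an arbitrary choice:

clusternode1:rh2adm> until python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py > /dev/null; [ $? -eq 15 ]; do sleep 10; done; echo "overall system replication status: ACTIVE (rc=15)"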
Example of hdbnsutil -sr_state:
[root@clusternode1]# su - rh2adm
clusternode1:rh2adm> watch -n 10 hdbnsutil -sr_state

Every 10.0s: hdbnsutil -sr_state    clusternode1: Thu Jun 22 08:42:00 2023

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: syncmem
operation mode: logreplay
site id: 2
site name: DC1

is source system: false
is secondary/consumer system: true
has secondaries/consumers attached: false
is a takeover active: false
is primary suspended: false
is timetravel enabled: false
replay mode: auto
active primary site: 1

primary masters: clusternode2

Host Mappings:
~~~~~~~~~~~~~~

clusternode1 -> [DC3] remotehost3
clusternode1 -> [DC1] clusternode1
clusternode1 -> [DC2] clusternode2

Site Mappings:
~~~~~~~~~~~~~~
DC2 (primary/primary)
    |---DC3 (syncmem/logreplay)
    |---DC1 (syncmem/logreplay)

Tier of DC2: 1
Tier of DC3: 2
Tier of DC1: 2

Replication mode of DC2: primary
Example of sr_stateConfiguration on the primary:
clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

mode: primary
site id: 2
site name: DC1
done.
Example of sr_stateConfiguration on a secondary:
clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

mode: syncmem
site id: 1
site name: DC2
active primary site: 2

primary masters: clusternode1
done.
You can also check on the secondary databases which node is the current primary. During a failover there can be two primary databases, and this information is needed to determine which potential primary is wrong and needs to be re-registered as a secondary.
For more information, refer to the example: Check the status on the primary and secondary systems.
6.1.6. Registering a secondary node
The prerequisites for registering a secondary database for the SAP HANA System Replication environment, such as the initial backup and the copied database keys, are described in the sections below.
Registration example:
clusternode1:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode2 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC1 --online
--operationMode not set; using default from global.ini/[system_replication]/operation_mode: logreplay
adding site ...
collecting information ...
updating local ini files ...
done.
After the registration, the global.ini file is updated automatically
…from:
# global.ini last modified 2023-06-15 09:55:05.665341 by /usr/sap/RH2/HDB02/exe/hdbnsutil -initTopology --workergroup=default --set_user_system_pw
[multidb]
mode = multidb
database_isolation = low
singletenant = yes

[persistence]
basepath_datavolumes = /hana/data/RH2
basepath_logvolumes = /hana/log/RH2
…to:
# global.ini last modified 2023-06-15 11:25:44.516946 by hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=02 --replicationMode=syncmem --name=DC1 --online
[multidb]
mode = multidb
database_isolation = low
singletenant = yes

[persistence]
basepath_datavolumes = /hana/data/RH2
basepath_logvolumes = /hana/log/RH2

[system_replication]
timetravel_logreplay_mode = auto
site_id = 3
mode = syncmem
actual_mode = syncmem
site_name = DC1
operation_mode = logreplay

[system_replication_site_masters]
1 = clusternode2:30201
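To display the replication-related sections directly from the DEFAULT-layer global.ini file (the path is listed in the log_mode section below), a quick sketch is:

clusternode1:rh2adm> grep -A 7 "^\[system_replication" /usr/sap/${SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini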
6.1.7. sapcontrol GetProcessList
Check the processes of the active SAP HANA database
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetProcessList

clusternode1: Wed Jun  7 08:23:03 2023
07.06.2023 08:23:03
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2023 06 02 16:59:42, 111:23:21, 4245
hdbcompileserver, HDB Compileserver, GREEN, Running, 2023 06 02 17:01:35, 111:21:28, 7888
hdbindexserver, HDB Indexserver-RH2, GREEN, Running, 2023 06 02 17:01:36, 111:21:27, 7941
hdbnameserver, HDB Nameserver, GREEN, Running, 2023 06 02 17:01:29, 111:21:34, 7594
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2023 06 02 17:01:35, 111:21:28, 7891
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2023 06 02 17:01:42, 111:21:21, 8339
hdbxsengine, HDB XSEngine-RH2, GREEN, Running, 2023 06 02 17:01:36, 111:21:27, 7944
6.1.8. sapcontrol GetInstanceList
This lists the status of the instances of an SAP HANA database. It also shows the ports. There are three different status names:
- GREEN (running)
- GRAY (stopped)
- YELLOW (status is currently changing)
Example of an active instance:
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList

clusternode1: Wed Jun  7 08:24:13 2023
07.06.2023 08:24:13
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
remotehost3, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GREEN
Example of a stopped instance:
clusternode1:rh2adm> sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList

22.06.2023 09:14:55
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
remotehost3, 2, 50213, 50214, 0.3, HDB|HDB_WORKER, GRAY
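When stopping an instance from a script, you may want to block until the GRAY status is reached; a minimal sketch reusing the command above (the polling interval is an arbitrary choice):

clusternode1:rh2adm> until sapcontrol -nr ${TINSTANCE} -function GetSystemInstanceList | grep -q GRAY; do sleep 10; done; echo "instance is stopped"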
6.1.9. hdbcons examples
You can also use the HDB console to display information about the database:
- hdbcons -e hdbindexserver 'replication info'
- hdbcons -e hdbindexserver help for more options
Example of 'replication info':
clusternode1:rh2adm> hdbcons -e hdbindexserver 'replication info'
hdbcons -p `pgrep hdbindex` 'replication info'
SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process with PID 451925
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 451925)

[OK]
--
## Start command at: 2023-06-22 09:05:25.211
listing default statistics for volume 3
System Replication Primary Information
======================================
System Replication Primary Configuration
 [system_replication] logshipping_timeout            = 30
 [system_replication] enable_full_sync               = false
 [system_replication] preload_column_tables          = true
 [system_replication] ensure_backup_history          = true
 [system_replication_communication] enable_ssl       = off
 [system_replication] keep_old_style_alert           = false
 [system_replication] enable_log_retention           = auto
 [system_replication] logshipping_max_retention_size = 1048576
 [system_replication] logshipping_async_buffer_size  = 268435456
- lastLogPos                  : 0x4ab2700
- lastLogPosTimestamp         : 22.06.2023-07.05.25 (1687417525193952)
- lastConfirmedLogPos         : 0x4ab2700
- lastConfirmedLogPosTimestamp: 22.06.2023-07.05.25 (1687417525193952)
- lastSavepointVersion        : 1286
- lastSavepointLogPos         : 0x4ab0602
- lastSavepointTimestamp      : 22.06.2023-07.02.42 (1687417362853007)
2 session registered.
Session index 0
- SiteID      : 3
- RemoteHost  : 192.168.5.137
Log Connection
- ptr         : 0x00007ff04c0a1000
- channel     : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671686293528, fd=70, refCnt=2, idx=5, local=192.168.5.134/40203_tcp, remote=192.168.5.137/40406_tcp, state=Connected, pending=[r---]}}}
- SSLActive   : false
- mode        : syncmem
Data Connection
- ptr         : 0x00007ff08b730000
- channel     : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671436247064, fd=68, refCnt=2, idx=6, local=192.168.5.134/40203_tcp, remote=192.168.5.137/40408_tcp, state=Connected, pending=[r---]}}}
- SSLActive   : false
Primary Statistics
- Creation Timestamp              : 20.06.2023-13.55.07 (1687269307772532)
- Last Reset Timestamp            : 20.06.2023-13.55.07 (1687269307772532)
- Statistic Reset Count           : 0
- ReplicationMode                 : syncmem
- OperationMode                   : logreplay
- ReplicationStatus               : ReplicationStatus_Active
- ReplicationStatusDetails        :
- ReplicationFullSync             : DISABLED
- shippedLogPos                   : 0x4ab2700
- shippedLogPosTimestamp          : 22.06.2023-07.05.25 (1687417525193952)
- sentLogPos                      : 0x4ab2700
- sentLogPosTimestamp             : 22.06.2023-07.05.25 (1687417525193952)
- sentMaxLogWriteEndPosition      : 0x4ab2700
- sentMaxLogWriteEndPositionReqCnt: 0x1f6b8
- shippedLogBuffersCount          : 142439
- shippedLogBuffersSize           : 805855232 bytes
- shippedLogBuffersSizeUsed       : 449305792 bytes (55.76%)
- shippedLogBuffersSizeNet        : 449013696 bytes (55.72%)
- shippedLogBufferDuration        : 83898615 microseconds
- shippedLogBufferDurationMin     : 152 microseconds
- shippedLogBufferDurationMax     : 18879 microseconds
- shippedLogBufferDurationSend    : 7301067 microseconds
- shippedLogBufferDurationComp    : 0 microseconds
- shippedLogBufferThroughput      : 9709099.18 bytes/s
- shippedLogBufferPendingDuration : 80583785 microseconds
- shippedLogBufferRealThrougput   : 10073190.40 bytes/s
- replayLogPos                    : 0x4ab2700
- replayLogPosTimestamp           : 22.06.2023-07.05.25 (1687417525193952)
- replayBacklog                   : 0 microseconds
- replayBacklogSize               : 0 bytes
- replayBacklogMax                : 822130896 microseconds
- replayBacklogSizeMax            : 49455104 bytes
- shippedSavepointVersion         : 0
- shippedSavepointLogPos          : 0x0
- shippedSavepointTimestamp       : not set
- shippedFullBackupCount          : 0
- shippedFullBackupSize           : 0 bytes
- shippedFullBackupSizeNet        : 0 bytes (-nan%)
- shippedFullBackupDuration       : 0 microseconds
- shippedFullBackupDurationComp   : 0 microseconds
- shippedFullBackupThroughput     : 0.00 bytes/s
- shippedFullBackupStreamCount    : 0
- shippedFullBackupResumeCount    : 0
- shippedLastFullBackupSize       : 0 bytes
- shippedLastFullBackupSizeNet    : 0 bytes (-nan%)
- shippedLastFullBackupStart      : not set
- shippedLastFullBackupEnd        : not set
- shippedLastFullBackupDuration   : 0 microseconds
- shippedLastFullBackupStreamCount : 0
- shippedLastFullBackupResumeCount : 0
- shippedDeltaBackupCount         : 0
- shippedDeltaBackupSize          : 0 bytes
- shippedDeltaBackupSizeNet       : 0 bytes (-nan%)
- shippedDeltaBackupDuration      : 0 microseconds
- shippedDeltaBackupDurationComp  : 0 microseconds
- shippedDeltaBackupThroughput    : 0.00 bytes/s
- shippedDeltaBackupStreamCount   : 0
- shippedDeltaBackupResumeCount   : 0
- shippedLastDeltaBackupSize      : 0 bytes
- shippedLastDeltaBackupSizeNet   : 0 bytes (-nan%)
- shippedLastDeltaBackupStart     : not set
- shippedLastDeltaBackupEnd       : not set
- shippedLastDeltaBackupDuration  : 0 microseconds
- shippedLastDeltaBackupStreamCount : 0
- shippedLastDeltaBackupResumeCount : 0
- currentTransferType             : None
- currentTransferSize             : 0 bytes
- currentTransferPosition         : 0 bytes (0%)
- currentTransferStartTime        : not set
- currentTransferThroughput       : 0.00 MB/s
- currentTransferStreamCount      : 0
- currentTransferResumeCount      : 0
- currentTransferResumeStartTime  : not set
- Secondary sync'ed via Log Count : 1
- syncLogCount                    : 3
- syncLogSize                     : 62840832 bytes
- backupHistoryComplete           : 1
- backupLogPosition               : 0x4a99980
- backupLogPositionUpdTimestamp   : 22.06.2023-06.56.27 (0x5feb26227e7af)
- shippedMissingLogCount          : 0
- shippedMissingLogSize           : 0 bytes
- backlogSize                     : 0 bytes
- backlogTime                     : 0 microseconds
- backlogSizeMax                  : 0 bytes
- backlogTimeMax                  : 0 microseconds
- Secondary Log Connect time      : 20.06.2023-13.55.31 (1687269331361049)
- Secondary Data Connect time     : 20.06.2023-13.55.33 (1687269333768341)
- Secondary Log Close time        : not set
- Secondary Data Close time       : 20.06.2023-13.55.31 (1687269331290050)
- Secondary Log Reconnect Count   : 0
- Secondary Log Failover Count    : 0
- Secondary Data Reconnect Count  : 1
- Secondary Data Failover Count   : 0
----------------------------------------------------------------
Session index 1
- SiteID      : 2
- RemoteHost  : 192.168.5.133
Log Connection
- ptr         : 0x00007ff0963e4000
- channel     : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671506282520, fd=74, refCnt=2, idx=0, local=192.168.5.134/40203_tcp, remote=192.168.5.133/40404_tcp, state=Connected, pending=[r---]}}}
- SSLActive   : false
- mode        : syncmem
Data Connection
- ptr         : 0x00007ff072c04000
- channel     : {<NetworkChannelSSLFilter>={<NetworkChannelBase>={this=140671463146520, fd=75, refCnt=2, idx=1, local=192.168.5.134/40203_tcp, remote=192.168.5.133/40406_tcp, state=Connected, pending=[r---]}}}
- SSLActive   : false
Primary Statistics
- Creation Timestamp              : 20.06.2023-13.55.49 (1687269349892111)
- Last Reset Timestamp            : 20.06.2023-13.55.49 (1687269349892111)
- Statistic Reset Count           : 0
- ReplicationMode                 : syncmem
- OperationMode                   : logreplay
- ReplicationStatus               : ReplicationStatus_Active
- ReplicationStatusDetails        :
- ReplicationFullSync             : DISABLED
- shippedLogPos                   : 0x4ab2700
- shippedLogPosTimestamp          : 22.06.2023-07.05.25 (1687417525193952)
- sentLogPos                      : 0x4ab2700
- sentLogPosTimestamp             : 22.06.2023-07.05.25 (1687417525193952)
- sentMaxLogWriteEndPosition      : 0x4ab2700
- sentMaxLogWriteEndPositionReqCnt: 0x1f377
- shippedLogBuffersCount          : 142326
- shippedLogBuffersSize           : 793939968 bytes
- shippedLogBuffersSizeUsed       : 437675200 bytes (55.13%)
- shippedLogBuffersSizeNet        : 437565760 bytes (55.11%)
- shippedLogBufferDuration        : 76954026 microseconds
- shippedLogBufferDurationMin     : 115 microseconds
- shippedLogBufferDurationMax     : 19285 microseconds
- shippedLogBufferDurationSend    : 2951495 microseconds
- shippedLogBufferDurationComp    : 0 microseconds
- shippedLogBufferThroughput      : 10446578.53 bytes/s
- shippedLogBufferPendingDuration : 73848247 microseconds
- shippedLogBufferRealThrougput   : 10875889.97 bytes/s
- replayLogPos                    : 0x4ab2700
- replayLogPosTimestamp           : 22.06.2023-07.05.25 (1687417525193952)
- replayBacklog                   : 0 microseconds
- replayBacklogSize               : 0 bytes
- replayBacklogMax                : 113119944 microseconds
- replayBacklogSizeMax            : 30171136 bytes
- shippedSavepointVersion         : 0
- shippedSavepointLogPos          : 0x0
- shippedSavepointTimestamp       : not set
- shippedFullBackupCount          : 0
- shippedFullBackupSize           : 0 bytes
- shippedFullBackupSizeNet        : 0 bytes (-nan%)
- shippedFullBackupDuration       : 0 microseconds
- shippedFullBackupDurationComp   : 0 microseconds
- shippedFullBackupThroughput     : 0.00 bytes/s
- shippedFullBackupStreamCount    : 0
- shippedFullBackupResumeCount    : 0
- shippedLastFullBackupSize       : 0 bytes
- shippedLastFullBackupSizeNet    : 0 bytes (-nan%)
- shippedLastFullBackupStart      : not set
- shippedLastFullBackupEnd        : not set
- shippedLastFullBackupDuration   : 0 microseconds
- shippedLastFullBackupStreamCount : 0
- shippedLastFullBackupResumeCount : 0
- shippedDeltaBackupCount         : 0
- shippedDeltaBackupSize          : 0 bytes
- shippedDeltaBackupSizeNet       : 0 bytes (-nan%)
- shippedDeltaBackupDuration      : 0 microseconds
- shippedDeltaBackupDurationComp  : 0 microseconds
- shippedDeltaBackupThroughput    : 0.00 bytes/s
- shippedDeltaBackupStreamCount   : 0
- shippedDeltaBackupResumeCount   : 0
- shippedLastDeltaBackupSize      : 0 bytes
- shippedLastDeltaBackupSizeNet   : 0 bytes (-nan%)
- shippedLastDeltaBackupStart     : not set
- shippedLastDeltaBackupEnd       : not set
- shippedLastDeltaBackupDuration  : 0 microseconds
- shippedLastDeltaBackupStreamCount : 0
- shippedLastDeltaBackupResumeCount : 0
- currentTransferType             : None
- currentTransferSize             : 0 bytes
- currentTransferPosition         : 0 bytes (0%)
- currentTransferStartTime        : not set
- currentTransferThroughput       : 0.00 MB/s
- currentTransferStreamCount      : 0
- currentTransferResumeCount      : 0
- currentTransferResumeStartTime  : not set
- Secondary sync'ed via Log Count : 1
- syncLogCount                    : 3
- syncLogSize                     : 61341696 bytes
- backupHistoryComplete           : 1
- backupLogPosition               : 0x4a99980
- backupLogPositionUpdTimestamp   : 22.06.2023-06.56.27 (0x5feb26227e670)
- shippedMissingLogCount          : 0
- shippedMissingLogSize           : 0 bytes
- backlogSize                     : 0 bytes
- backlogTime                     : 0 microseconds
- backlogSizeMax                  : 0 bytes
- backlogTimeMax                  : 0 microseconds
- Secondary Log Connect time      : 20.06.2023-13.56.21 (1687269381053599)
- Secondary Data Connect time     : 20.06.2023-13.56.27 (1687269387399610)
- Secondary Log Close time        : not set
- Secondary Data Close time       : 20.06.2023-13.56.21 (1687269381017244)
- Secondary Log Reconnect Count   : 0
- Secondary Log Failover Count    : 0
- Secondary Data Reconnect Count  : 1
- Secondary Data Failover Count   : 0
----------------------------------------------------------------
[OK]
## Finish command at: 2023-06-22 09:05:25.212 command took: 572.000 usec
--
[EXIT]
--
[BYE]
Help example:
clusternode1:rh2adm> hdbcons -e hdbindexserver help
SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process with PID 451925
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 451925)

[OK]
--
## Start command at: 2023-06-22 09:07:16.784
Synopsis:
help [<command name>]: Print command help
  - <command name> - Command name for which to display help
Available commands:
ae_tableload - Handle loading of column store tables and columns
all - Print help and other info for all hdbcons commands
authentication - Authentication management.
binarysemaphore - BinarySemaphore management
bye - Exit console client
cd - ContainerDirectory management
cfgreg - Basis Configurator
checktopic - CheckTopic management
cnd - ContainerNameDirectory management
conditionalvariable - ConditionalVariable management
connection - Connection management
context - Execution context management (i.e., threads)
converter - Converter management
cpuresctrl - Manage cpu resources such as last-level cache allocation
crash - Crash management
crypto - Cryptography management (SSL/SAML/X509/Encryption).
csaccessor - Display diagnostics related to the CSAccessor library
ddlcontextstore - Get DdlContextStore information
deadlockdetector - Deadlock detector.
debug - Debug management
distribute - Handling distributed systems
dvol - DataVolume management
ELF - ELF symbol resolution management
encryption - Persistence encryption management
eslog - Manipulate logger on extended storage
event - Event management
exit - Exit console client
flightrecorder - Flight Recorder
hananet - HANA-Net command interface
help - Display help for a command or command list
hkt - HANA Kernal Tracer (HKT) management
indexmanager - Get IndexManager information, especially for IndexHandles
itab - Internaltable diagnostics
jexec - Information and actions for Job Executor/Scheduler
licensing - Licensing management.
log - Show information about logger and manipulate logger
machine - Information about the machine topology
mm - Memory management
monitor - Monitor view command
mproxy - Malloc proxy management
msl - Mid size LOB management
mutex - Mutex management
numa - Provides NUMA statistics for all columns of a given table, broken down by column constituents like dictionary, data vector and index.
nvmprovider - NVM Provider
output - Command for managing output from the hdbcons
page - Page management
pageaccess - PageAccess management
profiler - Profiler
quit - Exit console client
readwritelock - ReadWriteLock management
replication - Monitor data and log replication
resman - ResourceManager management
rowstore - Row Store
runtimedump - Generate a runtime dump.
savepoint - Savepoint management
semaphore - Semaphore management
servicethreads - Thread information M_SERVICE_THREADS
snapshot - Snapshot management
stat - Statistics management
statisticsservercontroller - StatisticsServer internals
statreg - Statistics registry command
syncprimi - Syncprimitive management (Mutex, CondVariable, Semaphore, BinarySemaphore, ReadWriteLock)
table - Table Management
tablepreload - Manage and monitor table preload
trace - Trace management
tracetopic - TraceTopic management
transaction - Transaction management
ut - UnifiedTable Management
version - Version management
vf - VirtualFile management
x2 - get X2 info
[OK]
## Finish command at: 2023-06-22 09:07:16.785
command took: 209.000 usec
--
[EXIT]
--
[BYE]
6.1.10. Creating an SAP HANA backup
If you want to use SAP HANA System Replication, a backup must first be created on the primary system.
Example of how to do this as the <sid>adm user:
clusternode1:rh2adm> hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB "BACKUP DATA USING FILE ('/hana/backup/')"
clusternode1:rh2adm> hdbsql -i ${TINSTANCE} -u system -d ${SAPSYSTEMNAME} "BACKUP DATA USING FILE ('/hana/backup/')"
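To verify that the backups completed, one option is to query the backup catalog. This is only a sketch; check the M_BACKUP_CATALOG view and its columns against your SAP HANA version:

clusternode1:rh2adm> hdbsql -i ${TINSTANCE} -u system -d SYSTEMDB "SELECT ENTRY_TYPE_NAME, SYS_START_TIME, STATE_NAME FROM M_BACKUP_CATALOG ORDER BY SYS_START_TIME DESC" | head -5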
6.1.11. Enabling SAP HANA System Replication on the primary database
SAP HANA System Replication must be enabled on the primary node. This requires a backup to be created first.
clusternode1:rh2adm> hdbnsutil -sr_enable --name=DC1
nameserver is active, proceeding ...
successfully enabled system as system replication source site
done.
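As a quick, optional sanity check, you can confirm that the site now reports itself as the replication source:

clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration | grep mode
mode: primary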
6.1.12. Copying the database keys to the secondary node
The database keys need to be copied from the primary to the secondary database before it can be registered as a secondary.
For example:
clusternode1:rh2adm> scp -rp /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT remotehost3:/usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT
clusternode1:rh2adm> scp -rp /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_${SAPSYSTEMNAME}.KEY remotehost3:/usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_${SAPSYSTEMNAME}.KEY
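As an optional sanity check (not part of the original procedure), you can compare the checksums of the copied key file on both hosts:

clusternode1:rh2adm> sha256sum /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT
clusternode1:rh2adm> ssh remotehost3 sha256sum /usr/sap/${SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_${SAPSYSTEMNAME}.DAT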
6.1.13. Registering the secondary node for SAP HANA System Replication
Make sure the database keys have been copied to the secondary node first. Then run the registration command:
clusternode1:rh2adm> hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC1 --remoteName=DC3 --operationMode=logreplay --online
Parameter descriptions:
- remoteHost: hostname of the active node that runs the source (primary) database
- remoteInstance: the instance number of the database
- replicationMode: one of the following options
  - sync: synchronous on disk
  - async: asynchronous replication
  - syncmem: synchronous in memory
- name: an alias for this replication site
- remoteName: alias name of the source database
- operationMode: one of the following options
  - delta_datashipping: data is transferred periodically. Takeovers take a bit longer.
  - logreplay: the log is redone immediately on the remote site. Takeover is faster.
  - logreplay_readaccess: additional logreplay with read-only access to the second site is possible.
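After the registration, one way to confirm that the new secondary is attached is to run the replication status check on the primary site, as described in the section Checking the SAP HANA System Replication status. A sketch using the --site filter shown in the help output above (DC3 is the site name used in this example):

clusternode1:rh2adm> python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py --site=DC3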
6.1.14. Checking the log_mode of the SAP HANA database
There are two options for setting the log_mode:
- log_mode=overwrite
- log_mode=normal: this is the default value, and it is also required when the database instance runs as a primary. With SAP HANA Multitarget System Replication you must use log_mode=normal. The best way to check the log_mode is to use hdbsql:
Example including a wrong overwrite entry:
clusternode1:rh2adm> hdbsql -i ${TINSTANCE} -d ${SAPSYSTEMNAME} -u system
Password:

Welcome to the SAP HANA Database interactive terminal.

Type:  \h for help with commands
       \q to quit

hdbsql RH2=> select * from m_inifile_contents where key='log_mode'
FILE_NAME,LAYER_NAME,TENANT_NAME,HOST,SECTION,KEY,VALUE
"global.ini","DEFAULT","","","persistence","log_mode","normal"
"global.ini","HOST","","node2","persistence","log_mode","overwrite"
2 rows selected (overall time 46.931 msec; server time 30.845 msec)

hdbsql RH2=> exit
In this case we have two global.ini files:
- DEFAULT: /usr/sap/${SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini
- HOST: /hana/shared/${SAPSYSTEMNAME}/HDB${TINSTANCE}/${HOSTNAME}/global.ini

The HOST values override the DEFAULT values. You can also check both files before the database is started, and then use hdbsql again to verify the correct settings. You can change the log_mode by editing the global.ini file.
Example:
clusternode1:rh2adm> vim /hana/shared/${SAPSYSTEMNAME}/HDB${TINSTANCE}/${HOSTNAME}/global.ini

# global.ini last modified 2023-04-06 16:15:03.521715 by hdbnameserver
[persistence]
log_mode = overwrite
After changing the log_mode to normal, the file looks as follows:

# global.ini last modified 2023-04-06 16:15:03.521715 by hdbnameserver
[persistence]
log_mode = normal
After checking or updating the global.ini file(s), verify the log_mode value:
clusternode1:rh2adm> hdbsql -d ${SAPSYSTEMNAME} -i ${TINSTANCE} -u SYSTEM;

hdbsql RH2=> select * from m_inifile_contents where section='persistence' and key='log_mode'
FILE_NAME,LAYER_NAME,TENANT_NAME,HOST,SECTION,KEY,VALUE
"global.ini","DEFAULT","","","persistence","log_mode","normal"
"global.ini","HOST","","node2","persistence","log_mode","normal"
2 rows selected (overall time 60.982 msec; server time 20.420 msec)
The output also shows that this parameter needs to be set in the [persistence] section. When you change the log mode from overwrite to normal, it is recommended that you create a full data backup to ensure that the database can be recovered.
6.1.15. Discovering the primary database
There are several ways to identify the primary node, for example:
- pcs status | grep Promoted
- hdbnsutil -sr_stateConfiguration
- systemReplicationStatus.py
Option 1 - The following example of the systemReplicationStatus.py script with a filter returns the location of the primary database, and works on all nodes:
clusternode1:rh2adm> /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/Python/bin/python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py --sapcontrol=1 | egrep -e "3${TINSTANCE}01/HOST|PRIMARY_MASTERS"| head -1 | awk -F"=" '{ print $2 }'
Output:
clusternode2
Option 2 - The following example displays the systemReplicationStatus in a similar way for all nodes:
rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
Output:
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
6.1.16. Taking over the primary
Refer to the section Checking the replication status to check the primary and secondary nodes. In addition:
- Put the cluster into maintenance-mode
- Initiate the takeover on the secondary node
Example of enabling maintenance-mode for the cluster:
[root@clusternode1]# pcs property set maintenance-mode=true
On the secondary which will become the new primary, run as the <sid>adm user:
clusternode1:rh2adm> hdbnsutil -sr_takeover
This secondary becomes the primary, other active secondary databases get re-registered to the new primary, and the former primary needs to be manually re-registered as a secondary.
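Once the takeover has completed and the replication status has been verified, the cluster maintenance-mode would typically be removed again; a sketch (verify the cluster state in your environment first):

[root@clusternode1]# pcs property set maintenance-mode=false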
6.1.17. Re-registering the former primary as a secondary
Make sure the cluster is either stopped or put into maintenance-mode. Example:
clusternode2:rh2adm> hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=${TINSTANCE} --replicationMode=syncmem --name=DC2 --remoteName=DC3 --operationMode=logreplay --force_full_replica --online
In our example, we use a full replica. Your SAP HANA system administrator should know when a full replica is required.
6.1.18. Recovering from a failover
Refer to the sections Checking the SAP HANA System Replication status and Discovering the primary database. It is important that this information is consistent. If a node is not part of the systemReplicationStatus.py output and shows a different system replication state, check with your database administrator whether this node needs to be re-registered.
One way to resolve this situation is to re-register this site as a new secondary.
Sometimes a secondary instance will still not come up. In that case, unregister this site before re-registering it. Example of unregistering the secondary DC1:
clusternode1:rh2adm> hdbnsutil -sr_unregister --name=DC1
Example of re-registering DC1:
clusternode1:rh2adm> hdbnsutil -sr_register --name=DC1 --remoteHost=node2 --remoteInstance=02 --replicationMode=sync --operationMode=logreplay --online