Test title

Fail back the primary to a cluster node, re-enable the cluster, and re-register the third site as a secondary.
Test preconditions

- The SAP HANA primary is running on the third site.
- The cluster is partially running.
- The cluster is in maintenance mode.
- The former cluster primary is accessible.
Test steps

- Check the expected primary of the cluster.
- Fail over from the DC3 node to the DC1 node.
- Verify that the former secondary has become the new primary.
- Re-register remotehost3 as the new secondary.
- Set the cluster property maintenance-mode=false and verify that the cluster keeps working.
Test monitoring

On the new primary, run:

remotehost3:rh2adm> watch python $DIR_EXECUTABLE/python_support/systemReplicationStatus.py
[root@clusternode1]# watch pcs status --full

On the secondary, run:

clusternode1:rh2adm> watch hdbnsutil -sr_state
Test start

- Check the expected primary of the cluster: [root@clusternode1]# pcs resource
  The VIP and the promoted SAP HANA resource must run on the same node, which is the potential new primary.
- On this potential new primary, run: clusternode1:rh2adm> hdbnsutil -sr_takeover
- Re-register the former primary as the new secondary:
  remotehost3:rh2adm> hdbnsutil -sr_register \
  --remoteHost=clusternode1 \
  --remoteInstance=${TINSTANCE} \
  --replicationMode=syncmem \
  --name=DC3 \
  --remoteName=DC1 \
  --operationMode=logreplay \
  --force_full_replica \
  --online
- After setting maintenance-mode=false, the cluster continues to work.
Expected results

- The new primary starts SAP HANA.
- The replication status shows all three sites as replicated.
- The second cluster site automatically re-registers with the new primary.
- The Disaster Recovery (DR) site becomes an additional replica of the database.
How to return to the initial state

Run test 3.
Detailed description

Check whether the cluster is in maintenance-mode:

[root@clusternode1]# pcs property config maintenance-mode
Cluster Properties:
maintenance-mode: true

If maintenance-mode is not true, you can set it with:

[root@clusternode1]# pcs property set maintenance-mode=true
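The check-then-set step above can be scripted. The sketch below is a minimal illustration, assuming the `pcs property config` output format shown above; the helper names are made up, and `pcs` is of course only available on a cluster node.

```shell
# get_mm: read `pcs property config maintenance-mode` output on stdin and
# print the value of the maintenance-mode property.
get_mm() {
  awk -F': *' '/maintenance-mode/ {print $2}'
}

# Idempotent setter (sketch): only touch the property when it is not
# already true. Would be run as root on a cluster node.
ensure_maintenance_mode() {
  current=$(pcs property config maintenance-mode | get_mm)
  if [ "$current" != "true" ]; then
    pcs property set maintenance-mode=true
  fi
}

# Parsing demo with the output shown above:
printf 'Cluster Properties:\n maintenance-mode: true\n' | get_mm
# prints: true
```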
Check the system replication status and discover the primary database on all nodes. First, discover the primary with:

clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"

The output should look like the following.

On clusternode1:

clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
mode: syncmem
primary masters: remotehost3
On clusternode2:

clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
mode: syncmem
primary masters: remotehost3
On remotehost3:

remotehost3:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
mode: primary
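The primary can be read off mechanically from the two lines grepped on each node. A small sketch (the helper name is hypothetical, and the parsing assumes exactly the excerpt format shown above):

```shell
# primary_of: given the "mode:"/"primary masters" excerpt of
# hdbnsutil -sr_state ($1) and the local host name ($2), print the
# primary host. On the primary itself there is no "primary masters:"
# line, but mode is "primary", so the local host is the answer.
primary_of() {
  if printf '%s\n' "$1" | grep -q '^mode: primary'; then
    echo "$2"
  else
    printf '%s\n' "$1" | awk '/^primary masters:/ {print $3}'
  fi
}

# With the outputs shown above, all three nodes agree on remotehost3:
primary_of 'mode: syncmem
primary masters: remotehost3' clusternode1
# prints: remotehost3
```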
On all three nodes, the primary database is remotehost3. On this primary, the system replication status must be active for all three nodes, and the return code must be 15.
remotehost3:rh2adm> python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
|SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode1 | 30201 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode1 | 30207 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode1 | 30203 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True |
status system replication site "2": ACTIVE
status system replication site "1": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 3
site name: DC3
[rh2adm@remotehost3: python_support]# echo $?
15
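The `echo $?` check above relies on the exit code of `systemReplicationStatus.py` encoding the replication state, with 15 meaning ACTIVE (the healthy state) and, as seen later in this test, 11 meaning an error. As a convenience, the commonly documented code-to-state mapping can be wrapped in a small helper (a sketch; the function name is made up):

```shell
# sr_status_text: translate a systemReplicationStatus.py exit code into
# the replication state it represents (15 = ACTIVE is the healthy state
# checked throughout this test).
sr_status_text() {
  case "$1" in
    10) echo "NONE (no system replication configured)" ;;
    11) echo "ERROR" ;;
    12) echo "UNKNOWN" ;;
    13) echo "INITIALIZING" ;;
    14) echo "SYNCING" ;;
    15) echo "ACTIVE" ;;
    *)  echo "unexpected exit code: $1" ;;
  esac
}

# Typical use right after running the script:
#   python .../systemReplicationStatus.py; sr_status_text $?
sr_status_text 15
# prints: ACTIVE
```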
- Verify that the sr_state output is consistent on all three nodes.

Run hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode on all three nodes:

clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode
clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode
remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode
The output must be identical on all nodes:
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
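Comparing the three outputs by eye can be automated. A minimal sketch, assuming each node's excerpt has been collected (in practice via something like `ssh <node> "hdbnsutil -sr_state --sapcontrol=1" | grep site.*Mode`; the helper name is made up):

```shell
# all_consistent: succeed only if every argument (one sr_state excerpt
# per node) is byte-identical to the first one.
all_consistent() {
  ref="$1"; shift
  for s in "$@"; do
    [ "$s" = "$ref" ] || return 1
  done
  return 0
}

# Demo with the expected output shown above, as if read from three nodes:
state='siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem'

if all_consistent "$state" "$state" "$state"; then
  echo "sr_state consistent on all nodes"
fi
```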
On clusternode1, start:

clusternode1:rh2adm> watch "python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"

On remotehost3, start:

remotehost3:rh2adm> watch "python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"

On clusternode2, start:

clusternode2:rh2adm> watch "hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode"
To fail over to clusternode1, run on clusternode1:

clusternode1:rh2adm> hdbnsutil -sr_takeover
done.
The monitor on clusternode1 changes to:
Every 2.0s: python systemReplicationStatus.py; echo $? clusternode1: Mon Sep 4 23:34:30 2023
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
|SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
status system replication site "2": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 1
site name: DC1
15
The return code 15 is also important. The monitor on clusternode2 changes to:
Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2: Mon Sep 4 23:35:18 2023
siteReplicationMode/DC1=primary
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC2=logreplay
DC3 is gone and needs to be re-registered. On remotehost3, systemReplicationStatus reports an error, and the return code changes to 11. Re-enable the VIP resource:

[root@clusternode1]# pcs resource enable vip_RH2_02_MASTER
Warning: 'vip_RH2_02_MASTER' is unmanaged
The warning is expected, because the cluster will not start any resources until maintenance-mode=false.

Before ending maintenance mode, start two monitors in separate windows to watch the changes. On clusternode2, run:

[root@clusternode2]# watch pcs status --full

On clusternode1, run:

clusternode1:rh2adm> watch "python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"
Now you can unset maintenance-mode on clusternode1 by running:

[root@clusternode1]# pcs property set maintenance-mode=false
The monitor on clusternode2 should show that everything is running as expected:
Every 2.0s: pcs status --full clusternode1: Tue Sep 5 00:01:17 2023
Cluster name: cluster1
Cluster Summary:
* Stack: corosync
* Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum
* Last updated: Tue Sep 5 00:01:17 2023
* Last change: Tue Sep 5 00:00:30 2023 by root via crm_attribute on clusternode1
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ clusternode1 (1) clusternode2 (2) ]
Full List of Resources:
* auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1
* Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]:
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1
* Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable):
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1
* vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1
Node Attributes:
* Node: clusternode1 (1):
* hana_rh2_clone_state : PROMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode2
* hana_rh2_roles : 4:P:master1:master:worker:master
* hana_rh2_site : DC1
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : PRIM
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode1
* lpa_rh2_lpt : 1693872030
* master-SAPHana_RH2_02 : 150
* Node: clusternode2 (2):
* hana_rh2_clone_state : DEMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode1
* hana_rh2_roles : 4:S:master1:master:worker:master
* hana_rh2_site : DC2
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : SOK
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode2
* lpa_rh2_lpt : 30
* master-SAPHana_RH2_02 : 100
Migration Summary:
Tickets:
PCSD Status:
clusternode1: Online
clusternode2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
After manual interactions, it is recommended to clean up the cluster as described in Cluster Cleanup.

- Re-register remotehost3 as a secondary of the new primary clusternode1.

remotehost3 has to be re-registered. To monitor the progress, start on clusternode1:

clusternode1:rh2adm> watch -n 5 'python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $?'
On remotehost3, start:

remotehost3:rh2adm> watch 'hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode'

Now you can re-register remotehost3 with the following command:

remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode1 --remoteInstance=${TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC1 --operationMode=logreplay --online
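As an aside, a register call like the one above can be composed from variables to avoid typos in the many options. The sketch below only prints the command it would run (a dry run; the helper name is made up, and the fallback instance number is an assumption for illustration, since ${TINSTANCE} is normally set in the rh2adm environment):

```shell
# build_register_cmd: compose the hdbnsutil -sr_register call for
# re-attaching a site to a primary. Echoed instead of executed so the
# full command can be reviewed first.
#   $1 primary host, $2 instance number, $3 own site name, $4 remote site name
build_register_cmd() {
  printf 'hdbnsutil -sr_register --remoteHost=%s --remoteInstance=%s --replicationMode=async --name=%s --remoteName=%s --operationMode=logreplay --online\n' \
    "$1" "$2" "$3" "$4"
}

# Dry run of what would be executed on remotehost3
# (02 is only a fallback for illustration):
build_register_cmd clusternode1 "${TINSTANCE:-02}" DC3 DC1
```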
The monitor on clusternode1 changes to:
Every 5.0s: python /usr/sap/$SAPSYSTEMNAME/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $? clusternode1: Tue Sep 5 00:14:40 2023
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
|SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES |ASYNC |ACTIVE | | True |
|RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES |ASYNC |ACTIVE | | True |
|RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES |ASYNC |ACTIVE | | True |
|SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
status system replication site "3": ACTIVE
status system replication site "2": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 1
site name: DC1
Status 15
The monitor on remotehost3 changes to:
Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode remotehost3: Tue Sep 5 02:15:28 2023
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=syncmem
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
Now there are three entries again, and remotehost3 (DC3) is a secondary site replicating from clusternode1 (DC1).
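Whether all three sites are back can also be checked mechanically from the siteReplicationMode lines. A sketch (the helper name is made up; the input format is the sr_state excerpt shown above):

```shell
# site_count: count the siteReplicationMode entries in an sr_state
# excerpt; three entries means all three sites are registered again.
site_count() {
  printf '%s\n' "$1" | grep -c '^siteReplicationMode/'
}

# Demo with the expected post-re-registration output:
state='siteReplicationMode/DC1=primary
siteReplicationMode/DC3=syncmem
siteReplicationMode/DC2=syncmem'

if [ "$(site_count "$state")" -eq 3 ]; then
  echo "all three sites registered"
fi
```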
- Verify that all nodes appear in the system replication status of clusternode1.

Run hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode on all three nodes:

clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode
clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode
remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode
You should get the same output on all nodes:
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=syncmem
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
- Check the sync_state in pcs status --full. Run:

[root@clusternode1]# pcs status --full | grep sync_state

The output must show PRIM or SOK:
* hana_rh2_sync_state : PRIM
* hana_rh2_sync_state : SOK
Finally, the cluster status should look like the following, including the sync_state entries PRIM and SOK:
[root@clusternode1]# pcs status --full
Cluster name: cluster1
Cluster Summary:
* Stack: corosync
* Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum
* Last updated: Tue Sep 5 00:18:52 2023
* Last change: Tue Sep 5 00:16:54 2023 by root via crm_attribute on clusternode1
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ clusternode1 (1) clusternode2 (2) ]
Full List of Resources:
* auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1
* Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]:
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1
* Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable):
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1
* vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1
Node Attributes:
* Node: clusternode1 (1):
* hana_rh2_clone_state : PROMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode2
* hana_rh2_roles : 4:P:master1:master:worker:master
* hana_rh2_site : DC1
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : PRIM
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode1
* lpa_rh2_lpt : 1693873014
* master-SAPHana_RH2_02 : 150
* Node: clusternode2 (2):
* hana_rh2_clone_state : DEMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode1
* hana_rh2_roles : 4:S:master1:master:worker:master
* hana_rh2_site : DC2
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : SOK
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode2
* lpa_rh2_lpt : 30
* master-SAPHana_RH2_02 : 100
Migration Summary:
Tickets:
PCSD Status:
clusternode1: Online
clusternode2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Refer to Check cluster status and Check database to verify that everything works as expected again.