Verify that the cluster is in maintenance-mode:
[root@clusternode1]# pcs property config maintenance-mode
Cluster Properties:
maintenance-mode: true
If maintenance-mode is not true, you can set it with:
[root@clusternode1]# pcs property set maintenance-mode=true
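The check-and-set above can be combined into one idempotent sketch. `get_maintenance_mode` is a hypothetical helper, not part of pcs; it only parses the `pcs property config` output shown above, and the live `pcs` calls are left as comments:

```shell
# Hypothetical helper: extract the maintenance-mode value from
# `pcs property config maintenance-mode` output (read on stdin).
get_maintenance_mode() {
  awk -F': ' '/maintenance-mode/ { print $2 }'
}

# Example with the output shown above; on a live cluster you would pipe
# `pcs property config maintenance-mode` in instead.
sample='Cluster Properties:
maintenance-mode: true'

mode=$(printf '%s\n' "$sample" | get_maintenance_mode)
if [ "$mode" != "true" ]; then
  echo "would run: pcs property set maintenance-mode=true"
fi
```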
Check the system replication status and identify the primary database on all nodes.
First, discover the primary database with:
clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
The output should look like the following:
On clusternode1:
clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
mode: syncmem
primary masters: remotehost3
On clusternode2:
clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
mode: syncmem
primary masters: remotehost3
On remotehost3:
remotehost3:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters"
mode: primary
On all three nodes, the primary database is remotehost3.
On this primary, you need to verify that the system replication status is ACTIVE for all three nodes and that the return code is 15:
remotehost3:rh2adm> python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
|SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode1 | 30201 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode1 | 30207 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode1 | 30203 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True |
status system replication site "2": ACTIVE
status system replication site "1": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 3
site name: DC3
[rh2adm@remotehost3: python_support]# echo $?
15
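The `echo $?` matters because systemReplicationStatus.py encodes the overall replication state in its exit code; 15 (ACTIVE) is what this procedure requires on the primary. The mapping below reflects the commonly documented codes, so treat it as an assumption to confirm against your HANA revision:

```shell
# Map a systemReplicationStatus.py exit code to its documented meaning.
sr_status_name() {
  case "$1" in
    10) echo "NONE" ;;          # no system replication configured
    11) echo "ERROR" ;;         # e.g. a secondary was lost, as on remotehost3 later
    12) echo "UNKNOWN" ;;
    13) echo "INITIALIZING" ;;
    14) echo "SYNCING" ;;
    15) echo "ACTIVE" ;;
    *)  echo "UNEXPECTED($1)" ;;
  esac
}

# Typical use on the primary (left as a comment, since it needs a HANA host):
#   python .../systemReplicationStatus.py; sr_status_name $?
```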
Check that the sr_state is consistent on all three nodes.
On all three nodes, run hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode:
clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
The output should be identical on all nodes:
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=async
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
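Comparing the three outputs by eye is error-prone. A small helper can assert that the captured `site.*Mode` blocks are identical; the collection loop is illustrative only (it assumes passwordless ssh as rh2adm), while `all_equal` itself is plain shell:

```shell
# Return 0 only if all arguments are identical strings.
all_equal() {
  first=$1
  shift
  for s in "$@"; do
    [ "$s" = "$first" ] || return 1
  done
  return 0
}

# Illustrative collection step (assumes passwordless ssh as rh2adm):
#   for n in clusternode1 clusternode2 remotehost3; do
#     ssh "rh2adm@$n" 'hdbnsutil -sr_state --sapcontrol=1' | grep 'site.*Mode' | sort
#   done
# Capture each node's sorted output and pass the three strings to all_equal.
```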
Start monitoring in separate windows.
On clusternode1, start:
clusternode1:rh2adm> watch "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"
On remotehost3, start:
remotehost3:rh2adm> watch "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"
On clusternode2, start:
clusternode2:rh2adm> watch "hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode"
Start the test.
To fail over to clusternode1, run the following on clusternode1:
clusternode1:rh2adm> hdbnsutil -sr_takeover
done.
Check the output of the monitors.
The monitor on clusternode1 changes to:
Every 2.0s: python systemReplicationStatus.py; echo $? clusternode1: Mon Sep 4 23:34:30 2023
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
|SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
status system replication site "2": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 1
site name: DC1
15
The return code of 15 is also important.
The monitor on clusternode2 changes to:
Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2: Mon Sep 4 23:35:18 2023
siteReplicationMode/DC1=primary
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC2=logreplay
DC3 has been lost and needs to be re-registered.
On remotehost3, systemReplicationStatus reports an error and the return code changes to 11.
Verify that the cluster nodes have been re-registered:
clusternode1:rh2adm> hdbnsutil -sr_state
System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~
online: true
mode: primary
operation mode: primary
site id: 1
site name: DC1
is source system: true
is secondary/consumer system: false
has secondaries/consumers attached: true
is a takeover active: false
is primary suspended: false
Host Mappings:
~~~~~~~~~~~~~~
clusternode1 -> [DC2] clusternode2
clusternode1 -> [DC1] clusternode1
Site Mappings:
~~~~~~~~~~~~~~
DC1 (primary/primary)
|---DC2 (syncmem/logreplay)
Tier of DC1: 1
Tier of DC2: 2
Replication mode of DC1: primary
Replication mode of DC2: syncmem
Operation mode of DC1: primary
Operation mode of DC2: logreplay
Mapping: DC1 -> DC2
done.
The Site Mappings show that clusternode2 (DC2) has been re-registered.
Check or enable the vip resource:
[root@clusternode1]# pcs resource
* Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged):
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged)
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged)
* Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged):
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode2 (unmanaged)
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode1 (unmanaged)
* vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Stopped (disabled, unmanaged)
The vip resource vip_RH2_02_MASTER is stopped.
To start it again, run:
[root@clusternode1]# pcs resource enable vip_RH2_02_MASTER
Warning: 'vip_RH2_02_MASTER' is unmanaged
This warning is expected, because the cluster will not start any resources unless maintenance-mode=false.
Take the cluster out of maintenance-mode.
Before you do, start two monitors in separate windows so you can watch the changes.
On clusternode2, run:
[root@clusternode2]# watch pcs status --full
On clusternode1, run:
clusternode1:rh2adm> watch "python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \$?"
Now you can unset maintenance-mode on clusternode1 with:
[root@clusternode1]# pcs property set maintenance-mode=false
The monitor on clusternode1 shows that everything is running as expected:
Every 2.0s: pcs status --full clusternode1: Tue Sep 5 00:01:17 2023
Cluster name: cluster1
Cluster Summary:
* Stack: corosync
* Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum
* Last updated: Tue Sep 5 00:01:17 2023
* Last change: Tue Sep 5 00:00:30 2023 by root via crm_attribute on clusternode1
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ clusternode1 (1) clusternode2 (2) ]
Full List of Resources:
* auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1
* Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]:
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1
* Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable):
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1
* vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1
Node Attributes:
* Node: clusternode1 (1):
* hana_rh2_clone_state : PROMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode2
* hana_rh2_roles : 4:P:master1:master:worker:master
* hana_rh2_site : DC1
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : PRIM
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode1
* lpa_rh2_lpt : 1693872030
* master-SAPHana_RH2_02 : 150
* Node: clusternode2 (2):
* hana_rh2_clone_state : DEMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode1
* hana_rh2_roles : 4:S:master1:master:worker:master
* hana_rh2_site : DC2
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : SOK
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode2
* lpa_rh2_lpt : 30
* master-SAPHana_RH2_02 : 100
Migration Summary:
Tickets:
PCSD Status:
clusternode1: Online
clusternode2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
After a manual interaction, it is recommended to clean up the cluster as described in Cluster cleanup.
Re-register remotehost3 with the new primary on clusternode1.
remotehost3 needs to be re-registered. To monitor the progress, start the following on clusternode1:
clusternode1:rh2adm> watch -n 5 'python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $?'
On remotehost3, start:
remotehost3:rh2adm> watch 'hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode'
Now you can re-register remotehost3 with this command:
remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode1 --remoteInstance=${TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC1 --operationMode=logreplay --online
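The long sr_register call is easy to mistype, so it can help to wrap it in a function that keeps the site parameters in one place. This is a sketch of the exact command above, with the binary made overridable via `HDBNSUTIL` purely so the call can be dry-run outside a HANA host:

```shell
# Re-register remotehost3 (DC3) as an async secondary of clusternode1 (DC1).
# TINSTANCE is assumed to be set in the rh2adm environment, as in the text.
register_dc3() {
  "${HDBNSUTIL:-hdbnsutil}" -sr_register \
    --remoteHost=clusternode1 \
    --remoteInstance="${TINSTANCE}" \
    --replicationMode=async \
    --name=DC3 \
    --remoteName=DC1 \
    --operationMode=logreplay \
    --online
}

# Dry run (prints the arguments instead of executing hdbnsutil):
#   HDBNSUTIL=echo register_dc3
```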
The monitor on clusternode1 changes to:
Every 5.0s: python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status $? clusternode1: Tue Sep 5 00:14:40 2023
|Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary |
| | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced |
|-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ |
|SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES |ASYNC |ACTIVE | | True |
|RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES |ASYNC |ACTIVE | | True |
|RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES |ASYNC |ACTIVE | | True |
|SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
|RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True |
status system replication site "3": ACTIVE
status system replication site "2": ACTIVE
overall system replication status: ACTIVE
Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mode: PRIMARY
site id: 1
site name: DC1
Status 15
And the monitor on remotehost3 changes to:
Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode remotehost3: Tue Sep 5 02:15:28 2023
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=syncmem
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
There are again three entries, and remotehost3 (DC3) is again a secondary site replicating from clusternode1 (DC1).
Verify that all nodes are part of the system replication status on clusternode1.
On all three nodes, run hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode:
clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode
You should get the same output on all nodes:
siteReplicationMode/DC1=primary
siteReplicationMode/DC3=syncmem
siteReplicationMode/DC2=syncmem
siteOperationMode/DC1=primary
siteOperationMode/DC3=logreplay
siteOperationMode/DC2=logreplay
Check pcs status --full for the SOK state.
Run:
[root@clusternode1]# pcs status --full| grep sync_state
The output should show either PRIM or SOK:
* hana_rh2_sync_state : PRIM
* hana_rh2_sync_state : SOK
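This PRIM/SOK check can be automated. `sync_states_ok` below is a hypothetical helper that fails if any hana_rh2_sync_state attribute shows anything other than PRIM or SOK (for example SFAIL):

```shell
# Read `pcs status --full | grep sync_state` lines on stdin; exit 0 only if
# every sync_state value is PRIM or SOK.
sync_states_ok() {
  awk -F': ' '
    /sync_state/ { if ($2 != "PRIM" && $2 != "SOK") bad = 1 }
    END { exit bad }
  '
}

# Live usage:
#   pcs status --full | grep sync_state | sync_states_ok && echo "replication OK"
```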
Finally, the cluster status, including the sync_state values PRIM and SOK, should look like this:
[root@clusternode1]# pcs status --full
Cluster name: cluster1
Cluster Summary:
* Stack: corosync
* Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum
* Last updated: Tue Sep 5 00:18:52 2023
* Last change: Tue Sep 5 00:16:54 2023 by root via crm_attribute on clusternode1
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ clusternode1 (1) clusternode2 (2) ]
Full List of Resources:
* auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1
* Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]:
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2
* SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1
* Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable):
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2
* SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1
* vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1
Node Attributes:
* Node: clusternode1 (1):
* hana_rh2_clone_state : PROMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode2
* hana_rh2_roles : 4:P:master1:master:worker:master
* hana_rh2_site : DC1
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : PRIM
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode1
* lpa_rh2_lpt : 1693873014
* master-SAPHana_RH2_02 : 150
* Node: clusternode2 (2):
* hana_rh2_clone_state : DEMOTED
* hana_rh2_op_mode : logreplay
* hana_rh2_remoteHost : clusternode1
* hana_rh2_roles : 4:S:master1:master:worker:master
* hana_rh2_site : DC2
* hana_rh2_sra : -
* hana_rh2_srah : -
* hana_rh2_srmode : syncmem
* hana_rh2_sync_state : SOK
* hana_rh2_version : 2.00.062.00
* hana_rh2_vhost : clusternode2
* lpa_rh2_lpt : 30
* master-SAPHana_RH2_02 : 100
Migration Summary:
Tickets:
PCSD Status:
clusternode1: Online
clusternode2: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
To verify that everything is working again as expected, see Checking the cluster status and Checking the database.