Chapter 5. Test the cluster configuration


5.1. Check the constraints

[root]# pcs constraint
Location Constraints:
Ordering Constraints:
start s4h_ASCS20_group then start s4h_ERS29_group (kind:Optional) (non-symmetrical)
start s4h_ASCS20_group then stop s4h_ERS29_group (kind:Optional) (non-symmetrical)
Colocation Constraints:
s4h_ERS29_group with s4h_ASCS20_group (score:-5000)
Ticket Constraints:
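
If any of these constraints are missing, they can be recreated with commands along the following lines. This is a minimal sketch based on the group names shown in the output above; adjust the names to match your environment:

[root]# pcs constraint order start s4h_ASCS20_group then start s4h_ERS29_group symmetrical=false kind=Optional
[root]# pcs constraint order start s4h_ASCS20_group then stop s4h_ERS29_group symmetrical=false kind=Optional
[root]# pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group -5000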

5.2. Failover ASCS due to node crash

Before the crash, ASCS is running on s4node1 while ERS is running on s4node2.

[root@s4node1]# pcs status
...
Resource Group: s4h_ASCS20_group
s4h_fs_ascs20 (ocf::heartbeat:Filesystem): Started s4node1
s4h_vip_ascs20 (ocf::heartbeat:IPaddr2): Started s4node1
s4h_ascs20 (ocf::heartbeat:SAPInstance): Started s4node1
Resource Group: s4h_ERS29_group
s4h_fs_ers29 (ocf::heartbeat:Filesystem): Started s4node2
s4h_vip_ers29 (ocf::heartbeat:IPaddr2): Started s4node2
s4h_ers29 (ocf::heartbeat:SAPInstance): Started s4node2
...

On s4node2, run the following command to monitor the status changes in the cluster:

[root@s4node2 ~]# crm_mon -Arf
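
crm_mon keeps running and refreshes its output as the cluster state changes; -A shows node attributes, -r includes inactive resources, and -f shows resource failure counts. If only a single snapshot is needed instead of a continuously updating view, the one-shot option can be used, for example:

[root@s4node2 ~]# crm_mon -1Arf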

Crash s4node1 by running the following command. Note that the connection to s4node1 will be lost after the command is issued.

[root@s4node1 ~]# echo c > /proc/sysrq-trigger
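
If triggering a kernel crash is not acceptable in a given environment, a non-destructive alternative is to put the node into standby mode instead. Note that this is not part of the original test procedure and does not exercise fencing; the syntax below assumes the pcs version shipped with RHEL 8 or later:

[root@s4node1 ~]# pcs node standby s4node1
[root@s4node1 ~]# pcs node unstandby s4node1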

On s4node2, monitor the failover process. After the failover completes, the cluster should be in the following state, with both ASCS and ERS running on s4node2.

[root@s4node2 ~]# pcs status
 ...
Resource Group: s4h_ASCS20_group
s4h_fs_ascs20 (ocf::heartbeat:Filesystem): Started s4node2
s4h_vip_ascs20 (ocf::heartbeat:IPaddr2): Started s4node2
s4h_ascs20 (ocf::heartbeat:SAPInstance): Started s4node2
Resource Group: s4h_ERS29_group
s4h_fs_ers29 (ocf::heartbeat:Filesystem): Started s4node2
s4h_vip_ers29 (ocf::heartbeat:IPaddr2): Started s4node2
s4h_ers29 (ocf::heartbeat:SAPInstance): Started s4node2
...
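
Because s4node1 was crashed, the surviving node should also have fenced it. On recent pcs versions (the subcommand may not be available on older releases), the fencing history can be reviewed with:

[root@s4node2 ~]# pcs stonith history show s4node1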

5.3. ERS moves to the previously failed node

Bring s4node1 back online and start the cluster:

[root@s4node1 ~]# pcs cluster start
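
Before checking the resources, it can be useful to confirm that s4node1 has rejoined the cluster, for example with:

[root@s4node1 ~]# pcs status nodes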

ERS should move to s4node1, while ASCS remains on s4node2. Wait for ERS to finish migrating; at the end, the cluster should be in the following state:

[root@s4node1 ~]# pcs status
...
Resource Group: s4h_ASCS20_group
s4h_fs_ascs20 (ocf::heartbeat:Filesystem): Started s4node2
s4h_vip_ascs20 (ocf::heartbeat:IPaddr2): Started s4node2
s4h_ascs20 (ocf::heartbeat:SAPInstance): Started s4node2
Resource Group: s4h_ERS29_group
s4h_fs_ers29 (ocf::heartbeat:Filesystem): Started s4node1
s4h_vip_ers29 (ocf::heartbeat:IPaddr2): Started s4node1
s4h_ers29 (ocf::heartbeat:SAPInstance): Started s4node1
...
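
After the tests are complete, any failure counts left over from the simulated crash can be cleared so that the status output is clean again, for example with:

[root@s4node1 ~]# pcs resource cleanup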