Chapter 4. Install Pacemaker
Please refer to the following documentation to first set up a Pacemaker cluster.
Please make sure to follow the guidelines in Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH for the fencing/STONITH setup. Information about the fencing/STONITH agents supported for different platforms is available at Cluster Platforms and Architectures.
This guide will assume that the following things are working properly:
- Pacemaker cluster is configured according to documentation and has proper and working fencing
- Enqueue replication between the (A)SCS and ERS instances has been manually tested as explained in Setting up Enqueue Replication Server fail over
- The nodes are subscribed to the required channels as explained in RHEL for SAP Repositories and How to Enable Them
4.1. Configure general cluster properties
To avoid unnecessary failovers of the resources during initial testing and in production, set the following default values for the resource-stickiness and migration-threshold parameters. Note that defaults do not apply to resources that override them with their own defined values.
[root]# pcs resource defaults resource-stickiness=1
[root]# pcs resource defaults migration-threshold=3
Warning: As of RHEL 8.4 (pcs-0.10.8-1.el8), the commands above are deprecated. Use the commands below:
[root]# pcs resource defaults update resource-stickiness=1
[root]# pcs resource defaults update migration-threshold=3
Notes:
1. It is sufficient to run the commands above on one node of the cluster.
2. Setting resource-stickiness=1 encourages the resource to stay running where it is, while migration-threshold=3 causes the resource to move to a new node after 3 failures. A threshold of 3 is generally sufficient to prevent the resource from prematurely failing over to another node, and it keeps the resource failover time within a controllable limit.
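To confirm that the defaults took effect, you can list the configured resource defaults. This is a verification sketch; on RHEL 8.4 and later, `pcs resource defaults config` is the non-deprecated form, while older releases accept plain `pcs resource defaults`:

```shell
# Display the currently configured resource defaults; the output should
# include resource-stickiness=1 and migration-threshold=3
pcs resource defaults
```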
4.2. Install resource-agents-sap on all cluster nodes
[root]# yum install resource-agents-sap
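To verify the installation, you can check that the SAPInstance resource agent shipped by the package is now visible to the cluster. This is an optional sanity check, not a required step:

```shell
# List the available OCF resource agents and confirm the SAP agents
# (e.g. SAPInstance) appear in the output
pcs resource list | grep -i sap
```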
4.4. Configure ASCS resource group
4.4.1. Create resource for virtual IP address
[root]# pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.201 \
  --group s4h_ASCS20_group
4.4.2. Create resource for ASCS filesystem
Below is an example of creating a resource for an NFS filesystem:
[root]# pcs resource create s4h_fs_ascs20 Filesystem \
  device='<NFS_Server>:<s4h_ascs20_nfs_share>' \
  directory=/usr/sap/S4H/ASCS20 fstype=nfs force_unmount=safe \
  --group s4h_ASCS20_group \
  op start interval=0 timeout=60 \
  op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40
Below is an example of creating resources for an HA-LVM filesystem:
[root]# pcs resource create s4h_fs_ascs20_lvm LVM \
  volgrpname='<ascs_volume_group>' exclusive=true \
  --group s4h_ASCS20_group
[root]# pcs resource create s4h_fs_ascs20 Filesystem \
  device='/dev/mapper/<ascs_logical_volume>' \
  directory=/usr/sap/S4H/ASCS20 fstype=ext4 \
  --group s4h_ASCS20_group
4.4.3. Create resource for ASCS instance
[root]# pcs resource create s4h_ascs20 SAPInstance \
  InstanceName="S4H_ASCS20_s4ascs" \
  START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 \
  --group s4h_ASCS20_group \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 \
  op stop interval=0 timeout=600
Note: meta resource-stickiness=5000 is used here to balance out the failover constraint with ERS, so that the resource stays on the node where it started and does not migrate around the cluster uncontrollably.
Add a resource stickiness to the group to ensure that the ASCS will stay on a node if possible:
[root]# pcs resource meta s4h_ASCS20_group resource-stickiness=3000
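To review the resources and meta attributes of the group after these changes, you can display its configuration. This is an optional check; on pcs-0.10 and later the subcommand is `pcs resource config`, while older releases use `pcs resource show`:

```shell
# Show the full configuration of the ASCS resource group, including
# the resource-stickiness meta attribute set above
pcs resource config s4h_ASCS20_group
```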
4.5. Configure ERS resource group
4.5.1. Create resource for virtual IP address
[root]# pcs resource create s4h_vip_ers29 IPaddr2 ip=192.168.200.202 \
  --group s4h_ERS29_group
4.5.2. Create resource for ERS filesystem
Below is an example of creating a resource for an NFS filesystem:
[root]# pcs resource create s4h_fs_ers29 Filesystem \
  device='<NFS_Server>:<s4h_ers29_nfs_share>' \
  directory=/usr/sap/S4H/ERS29 fstype=nfs force_unmount=safe \
  --group s4h_ERS29_group \
  op start interval=0 timeout=60 \
  op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40
Below is an example of creating resources for an HA-LVM filesystem:
[root]# pcs resource create s4h_fs_ers29_lvm LVM \
  volgrpname='<ers_volume_group>' exclusive=true \
  --group s4h_ERS29_group
[root]# pcs resource create s4h_fs_ers29 Filesystem \
  device='/dev/mapper/<ers_logical_volume>' \
  directory=/usr/sap/S4H/ERS29 fstype=ext4 \
  --group s4h_ERS29_group
4.5.3. Create resource for ERS instance
Create an ERS instance cluster resource.
Note: In ENSA2 deployments the IS_ERS attribute is optional. Additional information about IS_ERS can be found in How does the IS_ERS attribute work on a SAP NetWeaver cluster with Standalone Enqueue Server (ENSA1 and ENSA2)?.
[root]# pcs resource create s4h_ers29 SAPInstance \
  InstanceName="S4H_ERS29_s4ers" \
  START_PROFILE=/sapmnt/S4H/profile/S4H_ERS29_s4ers \
  AUTOMATIC_RECOVER=false \
  --group s4h_ERS29_group \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 \
  op stop interval=0 timeout=600
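After both resource groups are defined, it can be helpful to confirm that all resources have started cleanly before adding the constraints in the next section:

```shell
# Show cluster status; all resources in s4h_ASCS20_group and
# s4h_ERS29_group should report Started, ideally on different nodes
pcs status
```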
4.6. Create constraints
4.6.1. Create colocation constraint for ASCS and ERS resource groups
Resource groups s4h_ASCS20_group and s4h_ERS29_group should try to avoid running on the same node. The order of the groups matters.
[root]# pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group -5000
4.6.2. Create location constraint for ASCS resource
The ASCS20 instance s4h_ascs20 prefers to run on the node where ERS was running before the failover.
[root]# pcs constraint location s4h_ascs20 rule score=2000 runs_ers_S4H eq 1
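If you want to confirm that this rule can match, the runs_ers_<SID> node attribute, which the SAPInstance agent sets on the node running ERS when IS_ERS is enabled, can be inspected. This is an optional verification sketch:

```shell
# List node attributes; the node currently running the ERS instance
# should show the runs_ers_<SID> attribute with value 1
pcs node attribute
```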
4.6.3. Create order constraint for ASCS and ERS resource groups
Prefer to start s4h_ASCS20_group before s4h_ERS29_group:
[root]# pcs constraint order start s4h_ASCS20_group then start \
  s4h_ERS29_group symmetrical=false kind=Optional
[root]# pcs constraint order start s4h_ASCS20_group then stop \
  s4h_ERS29_group symmetrical=false kind=Optional
4.6.4. Create order constraint for /sapmnt resource managed by cluster
If the shared filesystem /sapmnt is managed by the cluster, then the following constraints ensure that the resource groups with the ASCS and ERS SAPInstance resources are started only once the filesystem is available.
[root]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ASCS20_group
[root]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ERS29_group
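Once all constraints are in place, you can review them together to verify that the colocation, location, and order rules match the intent described above:

```shell
# Display all configured constraints
pcs constraint
# Check overall cluster health after the configuration is complete
pcs status
```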