Chapter 4. Setting up the cluster
4.1. Performing the basic cluster installation on each node
Please refer to Configuring and managing high availability clusters to first set up a pacemaker cluster.
Please make sure to follow the guidelines in Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH for the fencing/STONITH setup. Information about the fencing/STONITH agents supported for different platforms is available at Cluster Platforms and Architectures.
The rest of this guide assumes that the following things are working properly (a quick way to verify is sketched after the list):
- Pacemaker cluster is configured according to documentation and has proper and working fencing (see How to test fence devices and fencing configuration in a Red Hat High Availability cluster? for information on procedures for verifying that fencing is working properly).
- Enqueue replication between the (A)SCS and ERS instances has been manually tested, as explained in Setting up Enqueue Replication Server failover.
- All HA cluster nodes are subscribed to RHEL for SAP Applications or RHEL for SAP Solutions, and the required repositories are enabled as explained in RHEL for SAP Subscriptions and Repositories.
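As an optional sanity check before continuing (not part of the official procedure; output will vary by environment), the overall cluster and fencing state can be reviewed from any node:
[root@node1]# pcs status
[root@node1]# pcs stonith status
Both commands should show all nodes online and all fencing resources started.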
4.2. Configuring general cluster properties
To avoid unnecessary failovers of the resources, set the default values for the resource-stickiness and migration-threshold parameters by running the following commands on one cluster node:
[root@node1]# pcs resource defaults update resource-stickiness=1
[root@node1]# pcs resource defaults update migration-threshold=3
resource-stickiness=1 will encourage the resource to stay running where it is, while migration-threshold=3 will cause the resource to move to a new node after 3 failures. Three failures are generally sufficient to prevent the resource from prematurely failing over to another node, and this also ensures that the resource failover time stays within a controllable limit.
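To double-check the values now in effect, the configured resource defaults can be displayed (an optional verification step):
[root@node1]# pcs resource defaults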
4.3. Installing the resource-agents-sap package on all cluster nodes
The SAPInstance and SAPDatabase resource agents are provided via a separate resource-agents-sap package. Run the following command to install it on each HA cluster node:
[root@node<x>]# dnf install resource-agents-sap
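As an optional check that the package and the agents it provides are in place, the following commands can be used (pcs resource describe simply prints the agent metadata):
[root@node<x>]# rpm -q resource-agents-sap
[root@node<x>]# pcs resource describe SAPInstance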
4.5. Configuring the (A)SCS resource group
4.5.1. Creating resource for managing the virtual IP address of the (A)SCS instance
To allow application servers and other clients to connect to the (A)SCS instance on the HA cluster node where the instance is currently running, the virtual IP address (VIP) that has been assigned to the instance needs to be moved by the cluster when the (A)SCS instance is moving from one HA cluster node to another.
For this, a resource that manages the VIP needs to be created as part of the resource group that is used for managing the (A)SCS instance.
Please use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running.
On physical servers or VMs the resource can be created using the IPaddr2 resource agent:
[root@node1]# pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.101 --group s4h_ASCS20_group
4.5.2. Creating resource for managing the (A)SCS instance directory
Since SAP requires that the instance directory be only available on the HA cluster node where the instance is supposed to be running, it is necessary to set up HA cluster resources for managing the filesystems that are used for the instance directories.
Even if the instance directories are stored on NFS, it is still necessary to create the resource to allow the HA cluster to only mount the NFS export on the HA cluster node where the SAP instance should be running.
4.5.2.1. NFS
If the instance directory for the (A)SCS instance is located on NFS, the resource to have it managed as part of the resource group for managing the (A)SCS instance can be created with the following command:
[root@node1]# pcs resource create s4h_fs_ascs20 Filesystem device='<NFS_Server>:<s4h_ascs20_nfs_share>' directory=/usr/sap/S4H/ASCS20 fstype=nfs force_unmount=safe --group s4h_ASCS20_group \
  op start interval=0 timeout=60 \
  op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40
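Once the resource is started, it can be verified that the export is mounted on the node currently running the group, for example (an optional check; output will vary by environment):
[root@node1]# pcs resource status s4h_ASCS20_group
[root@node1]# df -h /usr/sap/S4H/ASCS20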
4.5.2.2. HA-LVM
When using HA-LVM to manage the instance directory for the (A)SCS instance, it must be configured according to the guidelines in the article What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?.
First, an LVM-activate cluster resource must be added, followed by a Filesystem cluster resource:
[root@node1]# pcs resource create s4h_fs_ascs20_lvm LVM-activate volgrpname='<ascs_volume_group>' vg_access_mode=system_id --group s4h_ASCS20_group
[root@node1]# pcs resource create s4h_fs_ascs20 Filesystem device='/dev/mapper/<ascs_logical_volume>' directory=/usr/sap/S4H/ASCS20 fstype=ext4 --group s4h_ASCS20_group
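With vg_access_mode=system_id, access to the volume group is controlled via the LVM system ID. As an optional check, the system ID currently set on the volume group can be displayed (using the same placeholder volume group name as above):
[root@node1]# vgs -o +systemid <ascs_volume_group>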
4.5.3. Creating resource for managing the (A)SCS instance
Create the (A)SCS instance cluster resource:
[root@node1]# pcs resource create s4h_ascs20 SAPInstance InstanceName="S4H_ASCS20_rhascs" START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_rhascs AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 \
  --group s4h_ASCS20_group \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 \
  op stop interval=0 timeout=600
resource-stickiness=5000 is used to balance out the failover constraint with the ERS resource so the resource stays on the node where it started and doesn’t migrate around the cluster uncontrollably.
migration-threshold=1 ensures that the (A)SCS instance fails over to another node when an issue is detected instead of restarting on the same HA cluster node. For ENSA2 setups this option is not required, since with ENSA2 restarting the (A)SCS instance on the same HA cluster node is allowed.
When all resources for the resource group have been created, add a resource stickiness to the group to ensure that the (A)SCS instance will stay on the HA cluster node it is currently running on, if possible:
[root@node1]# pcs resource meta s4h_ASCS20_group resource-stickiness=3000
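To review the resulting group configuration, including the meta attributes set above, recent pcs versions provide the following command (an optional verification step):
[root@node1]# pcs resource config s4h_ASCS20_group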
4.6. Configuring the ERS resource group
4.6.1. Creating resource for managing the virtual IP address of the ERS instance
Even though the ERS instance is not directly accessed by application servers, it still requires a virtual IP to allow SAP management tools to connect to the ERS instance on the HA cluster node where the instance is currently running. Therefore, the virtual IP address (VIP) that has been assigned to the instance needs to be moved by the cluster when the ERS instance is moving from one HA cluster node to another.
For this, a resource that manages the VIP needs to be created as part of the resource group that is used for managing the ERS instance.
Please use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running.
On physical servers or VMs the resource can be created using the IPaddr2 resource agent:
[root@node1]# pcs resource create s4h_vip_ers29 IPaddr2 ip=192.168.200.102 --group s4h_ERS29_group
4.6.2. Creating resource for managing the ERS instance directory
Since SAP requires that the instance directory be only available on the HA cluster node where the instance is supposed to be running, it is necessary to set up HA cluster resources for managing the filesystems that are used for the instance directories.
Even if the instance directories are stored on NFS, it is still necessary to create the resource to allow the HA cluster to only mount the NFS export on the HA cluster node where the SAP instance should be running.
4.6.2.1. NFS
If the instance directory for the ERS instance is located on NFS, the resource to have it managed as part of the resource group for managing the ERS instance can be created with the following command:
[root@node1]# pcs resource create s4h_fs_ers29 Filesystem device='<NFS_Server>:<s4h_ers29_nfs_share>' directory=/usr/sap/S4H/ERS29 fstype=nfs force_unmount=safe --group s4h_ERS29_group \
  op start interval=0 timeout=60 \
  op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40
4.6.2.2. HA-LVM
When using HA-LVM to manage the instance directory for the ERS instance, it must be configured according to the guidelines in the article What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?.
First, an LVM-activate cluster resource must be added, followed by a Filesystem cluster resource:
[root@node1]# pcs resource create s4h_fs_ers29_lvm LVM-activate volgrpname='<ers_volume_group>' vg_access_mode=system_id --group s4h_ERS29_group
[root@node1]# pcs resource create s4h_fs_ers29 Filesystem device='/dev/mapper/<ers_logical_volume>' directory=/usr/sap/S4H/ERS29 fstype=ext4 --group s4h_ERS29_group
4.6.3. Creating resource for managing the ERS instance
Create the ERS instance cluster resource:
[root@node1]# pcs resource create s4h_ers29 SAPInstance InstanceName="S4H_ERS29_rhers" START_PROFILE=/sapmnt/S4H/profile/S4H_ERS29_rhers AUTOMATIC_RECOVER=false IS_ERS=true --group s4h_ERS29_group \
  op monitor interval=20 on-fail=restart timeout=60 \
  op start interval=0 timeout=600 \
  op stop interval=0 timeout=600
The IS_ERS=true attribute is mandatory for ENSA1 deployments. More information about IS_ERS can be found in How does the IS_ERS attribute work on a SAP NetWeaver cluster with Standalone Enqueue Server (ENSA1 and ENSA2)?.
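With IS_ERS=true, the resource agent maintains the runs_ers_S4H node attribute that the ENSA1 location rule in section 4.7.2 below refers to. As an optional check, the attribute can be observed together with the cluster status:
[root@node1]# crm_mon -A1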
4.7. Creating constraints
4.7.1. Creating colocation constraint for (A)SCS and ERS resource groups
Resource groups s4h_ASCS20_group and s4h_ERS29_group should try to avoid running on the same node. The order of the groups in the colocation constraint matters.
[root@node1]# pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group -5000
4.7.2. Creating location constraint for (A)SCS resource (ENSA1 only)
When using ENSA1, it must be ensured that the (A)SCS instance moves to the node where the ERS instance is running when a failover occurs.
[root@node1]# pcs constraint location s4h_ascs20 rule score=2000 runs_ers_S4H eq 1
4.7.3. Creating order constraint for (A)SCS and ERS resource groups
If pacemaker decides to start the resource group for managing the (A)SCS instance at the same time as it stops the resource group for managing the ERS instance, the ERS resource group should only be stopped after the (A)SCS resource group has been started:
[root@node1]# pcs constraint order start s4h_ASCS20_group then stop s4h_ERS29_group symmetrical=false kind=Optional
Since symmetrical=false and kind=Optional are used, there can be situations where this constraint won’t take effect. For more information, refer to Determining the order in which cluster resources are run.
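All constraints configured so far can be listed for review, for example:
[root@node1]# pcs constraint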
4.7.4. Creating order constraints for /sapmnt resource managed by cluster
If the shared filesystem /sapmnt is managed by the cluster, then the following constraints ensure that the resource groups used for managing the (A)SCS and ERS instances are started only after the /sapmnt filesystem is available:
[root@node1]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ASCS20_group
[root@node1]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ERS29_group
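The constraints above assume that a clone resource named s4h_fs_sapmnt-clone already exists in the cluster. If /sapmnt is not yet managed by the cluster, such a clone could be created along the following lines, assuming /sapmnt is provided via NFS (the server and export names are placeholders):
[root@node1]# pcs resource create s4h_fs_sapmnt Filesystem device='<NFS_Server>:<s4h_sapmnt_nfs_share>' directory=/sapmnt fstype=nfs clone interleave=true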
4.8. Configuring cluster resource group for managing database instances (optional)
When using the HA cluster for managing a SAP NetWeaver-based SAP product that still uses a legacy database like Oracle, IBM DB2, SAP ASE, or SAP MaxDB, it is possible to also have the database instance managed by the cluster.
This chapter shows an example of how to set up a resource group for managing a single database instance using the SAPDatabase resource agent, together with the virtual IP address and the file system required by it.
The example setup described in this chapter uses the SAPSID RH1 instead of S4H, since the SAPDatabase resource agent cannot be used with S/4HANA setups.
4.8.1. Creating resource for managing the virtual IP address of the database instance
To create the resource for managing the virtual IP address used for accessing the database instance that will be part of the rh1_SAPDatabase_group, use the following command:
[root@node1]# pcs resource create rh1_vip_db IPaddr2 ip=192.168.200.115 --group rh1_SAPDatabase_group
4.8.2. Creating resource for managing the directories used by the database instance
The directories used by the database instance must only be mounted on the HA cluster node where the database instance is supposed to run, to avoid the database instance accidentally being started on another system at the same time, which would cause data corruption.
Depending on the way the storage for managing the directories used by the database instance is set up, different methods for creating the resources for managing the database directories have to be used.
Even if the instance directories are stored on NFS, it is still necessary to create the resource to allow the HA cluster to only mount the NFS export on the HA cluster node where the database instance should be running.
4.8.2.1. NFS
If the directories used by the database instance are located on NFS, a resource to have them managed as part of the resource group for managing the database instance must be created for each directory with the following command:
[root@node1]# pcs resource create rh1_fs_db Filesystem device='<NFS_Server>:<rh1_db_nfs_share>' directory=/sapdb/RH1 fstype=nfs force_unmount=safe --group rh1_SAPDatabase_group \
  op start interval=0 timeout=60 \
  op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40
4.8.2.2. HA-LVM
When using HA-LVM to manage the directories used by the database instance, it must be configured according to the guidelines in the article What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?.
First, an LVM-activate cluster resource must be added, followed by a Filesystem cluster resource:
[root@node1]# pcs resource create rh1_lvm_db LVM-activate volgrpname=vg_db vg_access_mode=system_id --group rh1_SAPDatabase_group
[root@node1]# pcs resource create rh1_fs_db Filesystem device=/dev/vg_db/lv_db directory=/sapdb/RH1 fstype=xfs --group rh1_SAPDatabase_group
If multiple file systems are used for the database directories, then a separate Filesystem cluster resource must be created for each one.
4.8.3. Configuring SAPDatabase cluster resource
After the resources for the virtual IP address and the filesystems required by the database instance have been added, the SAPDatabase cluster resource that will manage the database instance can be added to the resource group:
[root@node1]# pcs resource create rh1_SAPDatabase SAPDatabase DBTYPE="ADA" SID="RH1" STRICT_MONITORING="TRUE" AUTOMATIC_RECOVER="TRUE" --group rh1_SAPDatabase_group
In this example, DBTYPE="ADA" selects SAP MaxDB; the SAPDatabase resource agent also supports other database types (for example, ORA for Oracle, DB6 for IBM DB2, and SYB for SAP ASE).
4.9. Configuring Primary/Additional Application Server (PAS/AAS) resource group (Optional)
This section describes how to configure a resource group for managing the Primary Application Server (PAS) instance and the associated VIP and filesystem for the instance directory, in case the PAS instance should also be managed by the HA cluster. The same configuration can also be used for Additional Application Server (AAS) instances that should be managed by the HA cluster.
4.9.1. Creating resource for managing the Virtual IP address (VIP) for the PAS/AAS instance
To allow other application servers and clients to connect to PAS/AAS instances managed by the HA cluster, the virtual IP address (VIP) that has been assigned to the instance needs to be moved by the cluster when a PAS/AAS instance is moving from one HA cluster node to another.
For this, a resource that manages the VIP needs to be created as part of the resource group that is used for managing a PAS/AAS instance.
Please use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running.
On physical servers or VMs the resource can be created using the IPaddr2 resource agent:
[root@node1]# pcs resource create s4h_vip_pas_d21 IPaddr2 ip=192.168.200.103 --group s4h_PAS_D21_group
4.9.2. Creating resource for managing the filesystem for the PAS/AAS instance directory
Since SAP requires that the instance directory be only available on the HA cluster node where the instance is supposed to be running, it is necessary to set up HA cluster resources for managing the filesystems that are used for the instance directories.
Even if the instance directories are stored on NFS, it is still necessary to create the resource to allow the HA cluster to only mount the NFS export on the HA cluster node where the SAP instance should be running.
4.9.2.1. NFS
If the instance directory for the PAS/AAS instance is located on NFS, the resource to have it managed as part of the resource group for managing the PAS/AAS instance can be created with the following command:
[root@node1]# pcs resource create s4h_fs_pas_d21 Filesystem device='<NFS_Server>:<s4h_pas_d21_nfs_share>' directory=/usr/sap/S4H/D21 fstype=nfs force_unmount=safe --group s4h_PAS_D21_group \
  op start interval=0 timeout=60 \
  op stop interval=0 timeout=120 \
  op monitor interval=200 timeout=40
4.9.2.2. HA-LVM
When using HA-LVM to manage the instance directory for the PAS/AAS instance, it must be configured according to the guidelines in the article What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?.
First, an LVM-activate cluster resource must be added, followed by a Filesystem cluster resource:
[root@node1]# pcs resource create s4h_lvm_pas_d21 LVM-activate volgrpname=vg_d21 vg_access_mode=system_id --group s4h_PAS_D21_group
[root@node1]# pcs resource create s4h_fs_pas_d21 Filesystem device=/dev/vg_d21/lv_d21 directory=/usr/sap/S4H/D21 fstype=xfs --group s4h_PAS_D21_group
4.9.3. Configuring PAS/AAS SAPInstance cluster resource
To enable pacemaker to manage a PAS or AAS instance, the same SAPInstance resource agent as for the (A)SCS/ERS instances can be used. Compared to the (A)SCS/ERS setup, the PAS/AAS instance is a simple instance and requires fewer attributes to be configured.
The command below shows an example of how to create a resource for the PAS instance D21 and place it at the end of the s4h_PAS_D21_group resource group.
[root@node1]# pcs resource create s4h_pas_d21 SAPInstance InstanceName="S4H_D21_s4h-pas" DIR_PROFILE=/sapmnt/S4H/profile START_PROFILE=/sapmnt/S4H/profile/S4H_D21_s4h-pas --group s4h_PAS_D21_group
4.9.4. Configuring constraints
4.9.4.1. Configuring order constraints for the PAS/AAS resource group(s)
The PAS/AAS instance(s) require the (A)SCS and database instances to be running before they can start properly. The following sections show how to set up the required constraints based on the different types of database instances that can be used by SAP NetWeaver / S/4HANA.
4.9.4.1.1. Deployments with rh1_SAPDatabase_group
For configurations where a single cluster resource group starts all resources needed by the database (in this example, the SAPDatabase resource agent manages the database and is part of the database group rh1_SAPDatabase_group), the commands below create constraints that start the whole rh1_PAS_D21_group only once the (A)SCS instance has been started and the database group rh1_SAPDatabase_group is running.
[root@node1]# pcs constraint order rh1_SAPDatabase_group then rh1_PAS_D21_group kind=Optional symmetrical=false
[root@node1]# pcs constraint order start rh1_ASCS20_group then rh1_PAS_D21_group kind=Optional symmetrical=false
4.9.4.1.2. Deployments with SAP HANA with System Replication as database
When using a SAP HANA database that is configured for system replication (SR) and managed by the cluster, the following constraints ensure that the whole s4h_PAS_D21_group group starts only once the (A)SCS instance has been started and the SAP HANA SAPHana_S4H_02-master resource has been promoted.
[root@node1]# pcs constraint order promote SAPHana_S4H_02-master then s4h_PAS_D21_group kind=Optional symmetrical=false
[root@node1]# pcs constraint order start s4h_ASCS20_group then s4h_PAS_D21_group kind=Optional symmetrical=false
4.9.4.2. Configuring colocation constraint for PAS and AAS SAPInstance cluster resources (Optional)
To ensure that PAS and AAS instances do not run on the same node whenever both nodes are running, you can add a negative colocation constraint with the command below:
[root@node1]# pcs constraint colocation add s4h_AAS_D22_group with s4h_PAS_D21_group score=-1000
The score of -1000 ensures that if only one node is available, the PAS/AAS instance will continue to run on that remaining node. In such a situation, if you would rather keep the AAS instance down, you can use score=-INFINITY, which enforces this condition.
4.9.4.3. Creating order constraint for /sapmnt resource managed by cluster
If the shared filesystem /sapmnt is managed by the cluster, then the following constraint ensures that the resource groups used for managing the PAS/AAS instance(s) are started only after the /sapmnt filesystem is available:
[root@node1]# pcs constraint order s4h_fs_sapmnt-clone then s4h_PAS_D21_group
4.10. Standalone Enqueue Server 2 (ENSA2) Multi-Node Cluster (optional)
For SAP S/4HANA with ENSA2, more than two HA cluster nodes can be used for managing the ASCS and ERS instances. Please use the guidelines in the following sections in case an additional cluster node should be added, to allow more flexibility for the instances to fail over if there is an issue with the node they were running on.
4.10.1. OS Configuration
Create a node that’s identical to the first two nodes, in terms of resources, subscriptions, OS configuration, etc.
In the example, the hostname of the node is node3. Make sure the /etc/hosts file on each cluster node contains the hostnames and IP addresses of all cluster nodes and also the virtual hostnames and virtual IP addresses of all SAP instances that are managed by the HA cluster.
Make sure to copy the SAP-related entries in /etc/services from one of the first two nodes to the third node.
4.10.2. Creating users and groups
Create the users and groups required by the SAP instances identical to the ones used on the other nodes. For example:
Groups in /etc/group:
sapsys:x:1010:
sapinst:x:1011:root,s4hadm
Users in /etc/passwd:
s4hadm:x:1020:1010:SAP System Administrator:/home/s4hadm:/bin/csh
sapadm:x:1001:1010:SAP System Administrator:/home/sapadm:/bin/false
4.10.3. Creating local directories and mount points for the shared file systems
Create all mount points required for all instances that should be able to run on the additional HA cluster node:
/sapmnt
/usr/sap/
/usr/sap/SYS/
/usr/sap/trans/
/usr/sap/S4H/
/usr/sap/S4H/ASCS20/
/usr/sap/S4H/ERS29/
/usr/sap/S4H/D<Ins#>/
Make sure to set the user and group ownership for all directories to the same user and group as on the other cluster nodes, and copy the contents of the local directories (e.g., /usr/sap/SYS) from one of the other cluster nodes.
If /sapmnt and /usr/sap/trans are statically mounted on the existing HA cluster nodes via /etc/fstab, then these file systems must also be added to the /etc/fstab on the additional HA cluster node and the file systems must be mounted afterwards.
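For illustration, assuming both file systems are provided via NFS, the corresponding /etc/fstab entries on the additional node could look like this (server and export names are placeholders):
<NFS_Server>:<s4h_sapmnt_nfs_share>  /sapmnt  nfs  defaults  0 0
<NFS_Server>:<s4h_trans_nfs_share>  /usr/sap/trans  nfs  defaults  0 0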
If /sapmnt and /usr/sap/trans are managed by the cluster, then the cluster configuration must be updated so that the file systems will also be mounted on the additional HA cluster node.
4.10.4. Installing the RHEL HA Add-On and the resource agents for managing SAP instances
For the node to be part of the cluster and to be able to manage the SAP instances, install the required packages:
[root@node3]# dnf install pcs pacemaker resource-agents-sap
4.10.5. Adding the node to the cluster
On one node of the existing cluster add the third node:
[root@node1]# pcs cluster auth node3
Username: hacluster
Password:
[root@node1]# pcs cluster node add node3
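Afterwards, the cluster services can be started on the new node and the membership verified, for example:
[root@node1]# pcs cluster start node3
[root@node1]# pcs status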
4.10.6. Updating fencing/STONITH configuration to include the 3rd node
Depending on the STONITH setup, you may need to update the STONITH resource to include the 3rd HA cluster node.
Before moving any resources to the new HA cluster node, please use the following command to verify that it is possible to fence the new HA cluster node from one of the existing HA cluster nodes:
[root@node1]# pcs stonith fence node3
4.10.7. Updating ERS resource configuration
To ensure that the ERS instance stays on the node where it started and doesn’t migrate around the cluster uncontrollably, set resource-stickiness for the resource:
[root@node1]# pcs resource meta s4h_ers29 resource-stickiness=3000
4.11. Enabling the SAP HA interface to allow SAP instances controlled by the cluster to be managed by SAP Management tools (Optional)
To allow SAP admins to manage the SAP application server instances that are controlled by the HA cluster setup described in this documentation using tools like SAP Landscape Management (LaMa), the SAP HA interface must be enabled for each SAP application server instance managed by the HA cluster. This ensures that the HA cluster is aware of any action performed by the SAP management tools that affects the cluster resources used to manage the SAP instances (for example, the HA cluster needs to be notified if a SAP application server instance it manages is being started or stopped via SAP LaMa).
Please check How to enable the SAP HA Interface for SAP ABAP application server instances managed by the RHEL HA Add-On? for instructions on how to configure the SAP HA interface.
4.12. Enabling cluster to auto-start at boot (optional)
By default, the HA cluster is not enabled to start automatically when the OS boots, so it needs to be started manually after a cluster node has been fenced and rebooted.
The automatic start of all cluster components on all cluster nodes can be enabled with the following command:
[root@node1]# pcs cluster enable --all
In some situations it can be beneficial not to have the cluster auto-start after a node has been rebooted. For example, if there is an issue with a filesystem that is required by a cluster resource, the filesystem needs to be repaired first before it can be used again. Having the cluster auto-start, but then fail because the filesystem doesn’t work, can cause even more trouble.
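If auto-start was enabled earlier and this behavior is preferred instead, it can be turned off again with:
[root@node1]# pcs cluster disable --all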